For all that, the cloud remains a loosely defined term, or rather a form of computing with too many definitions, NIST's included. That's because it sums up a set of innovations and possibilities that sit atop the new layer of virtualization in the data center. The cloud is as much about a change in the style of managing computing resources, and a new model of distribution, as it is about the technology itself.
At the moment, its defining characteristic is that it is offered in a self-service format. In the future, it may be offered through broader sets of automated services, where the user orders up not literally a server but a complete software stack, based on how interactive he wishes to be with his computing resource. Taken to its extreme, the cloud will allow some users to modify an application to suit their needs, or create one outright, then run it on a high-performance cluster.
But let's not get ahead of ourselves. Today's data center manager needs a way to map his existing infrastructure onto a more cloud-like set of operations. That is far more easily said than done, given the resistance of mainframe, aging proprietary, and Unix systems to entering the world of the x86 instruction set.
The AFCOM position paper guides this process in somewhat general terms, sketching out how to formulate a logical position on the desired cloud-type resources, then how to translate that position into physical data center resources. That still leaves a lot of work to be done by the systems architect, the network manager, the chief security officer and the operations manager.
But AFCOM is saying that this work will soon become necessary. Don't listen to the cloud critics; they will only impede your movement toward a desirable and ultimately necessary new architecture.