Companies that want the benefits of cloud computing services without the risks are looking to create cloud-like environments in their own data centers. To do it, they'll need to add a layer of new technologies--virtualization management, cloud APIs, self-service portals, chargeback systems, and more--to existing data center systems and processes.
Be ready for a debate as you discuss this new way of doing things. Just the term "private cloud" irks some computer industry veterans, who argue that cloud computing by definition is something that happens outside of your data center, or that the technologies involved in private clouds have been around for years, or both. Even some of my InformationWeek colleagues pooh-pooh private clouds. "Nothing new under the sun," scoffed one editor.
It's true that no single piece of an internal cloud architecture looks like breakthrough technology; it all looks deceptively familiar. I would argue, however, that private clouds represent a convergence of tech trends holding great promise for enterprise computing. Private clouds combine modular commodity hardware that can be sliced and diced into many small pieces with networking and storage that can be dynamically allocated through preset policies.
A virtualization management layer treats that whole set of technologies as a combined resource, while Internet networking and Web services allow us to interact with the cloud from any location. We can create new services out of existing ones hosted in the cloud and run user workloads at the click of a button. End users were far removed from the old mainframe and Unix server data center; with clouds, the business user can become king. Creating a private cloud will take considerable IT skill, but once one is built, authorized business users will be able to tap that computing power without a lot of know-how.
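The preset-policy allocation described above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's API: a shared pool of CPU cores is handed out on demand, subject to preset per-tenant quotas, roughly the way a self-service portal might work behind the scenes. All names and numbers are invented.

```python
# Hypothetical sketch of policy-based capacity allocation in a private
# cloud: a shared pool of CPU cores is carved up on demand, subject to
# preset per-tenant quotas. Names and numbers are illustrative only.

class ResourcePool:
    def __init__(self, total_cores, quotas):
        self.free = total_cores          # unallocated cores in the pool
        self.quotas = quotas             # preset policy: max cores per tenant
        self.used = {t: 0 for t in quotas}

    def request(self, tenant, cores):
        """Grant the request only if pool capacity and policy both allow it."""
        within_quota = self.used[tenant] + cores <= self.quotas[tenant]
        if within_quota and cores <= self.free:
            self.used[tenant] += cores
            self.free -= cores
            return True
        return False

    def release(self, tenant, cores):
        """Return capacity to the pool when a workload finishes."""
        self.used[tenant] -= cores
        self.free += cores


pool = ResourcePool(total_cores=64, quotas={"analytics": 32, "web": 16})
print(pool.request("analytics", 24))  # True: within quota and capacity
print(pool.request("web", 24))        # False: exceeds web's 16-core quota
```

The point of the sketch is that the intelligence lives in the policy, not the hardware: once quotas are set, business users can claim and release capacity without involving IT in each request.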
The Department of Veterans Affairs has deployed a small internal cloud. It wanted an early-warning system that could analyze data from its 100-plus clinics and hospitals and spot outbreaks of infectious diseases, and it had to do so on a tight budget. The project, dubbed the Health Associated Infection and Influenza Surveillance System, was built on six standard blade servers with converged network and storage I/O. The CPUs can be managed individually or as a virtualized whole, with workloads shifted and capacity summoned as necessary.
The six-blade system runs Egenera's cloud management software, PAN Manager, which manages I/O, networking, and storage for the servers as a logical set. It can execute several applications while always having enough horsepower for its main job. The system's Dell blades and storage can be virtualized as a pooled resource in such a way that processing power can be devoted quickly to the surveillance system, its highest-priority task. In many ways, the VA's new system anticipated Cisco's recently introduced "unified computing" platform, a virtualized, multiblade server chassis with converged I/O that Cisco touts as just the thing for cloud computing.
Some see a hard line between the public clouds operated by Amazon Web Services, Google, and Microsoft and mixed-use corporate data centers. Such a line used to exist between proprietary enterprise networks and the Internet, too. Yet intranets gradually took over some of the functions of those enterprise networks because they were patterned on TCP/IP and, thus, were compatible with the Internet surrounding them. Standard TCP/IP ultimately replaced proprietary networks, and the Internet began to function as an extension of corporate networks.
A similar phenomenon could, and probably will, happen with cloud computing. If efficient external clouds such as Amazon's Elastic Compute Cloud are based on a few standards, why can't data centers start to be built out as internal clouds that more closely resemble them? And once the two start to match up in architecture, what's to prevent a workload in one from being exported to the other?
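One way to picture that portability is a workload described once, in a cloud-neutral manifest, then rendered for whichever environment will run it. The manifest fields and target names below are invented for illustration; a real hybrid deployment would use a standard such as OVF or a provider's own templates.

```python
# Hypothetical sketch: a cloud-neutral workload description rendered for
# either an internal cloud or a public one. Field names and target
# identifiers are invented for illustration, not any real API.

def render_manifest(workload, target):
    """Translate one workload description into target-specific settings."""
    base = {
        "name": workload["name"],
        "image": workload["image"],
        "cpus": workload["cpus"],
        "memory_gb": workload["memory_gb"],
    }
    if target == "internal":
        # Internal cloud: schedule onto the on-premises blade pool.
        base["placement"] = "blade-pool-1"
    elif target == "public":
        # Public cloud: map resource needs onto an instance size.
        base["instance_type"] = "large" if workload["cpus"] >= 4 else "small"
    else:
        raise ValueError(f"unknown target: {target}")
    return base


workload = {"name": "reporting", "image": "reporting-v2", "cpus": 4, "memory_gb": 8}
print(render_manifest(workload, "internal")["placement"])    # blade-pool-1
print(render_manifest(workload, "public")["instance_type"])  # large
```

The closer the two architectures match, the thinner that translation layer becomes, which is exactly the convergence the hybrid-cloud argument depends on.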
That's the concept known as a hybrid cloud--part public cloud service, part internal cloud--and Bob Muglia, president of Microsoft's server and tools division, expects many companies to move in this direction. "All of our customers will have Windows servers on premises and, over time, add usage of cloud services," he says.
But Muglia adds that hybrid clouds will be "super hard" to pull off when they involve applications that require true cross-cloud integration, not simply moving a virtualized application from a private cloud to a public cloud. "The hard part is moving all of the services attached to that workload," he says. Muglia's group hopes to solve that problem by incorporating technologies developed for Microsoft's Windows Azure cloud operating system into Windows Server, so that the two environments will not only resemble each other but also work together. For Microsoft, however, that work all lies ahead.
Stephen Brobst, CTO of data warehouse provider Teradata, foresees other complications. For instance, while it's technically feasible to run data warehouses in public clouds, there are privacy and governance concerns that make it almost inconceivable to do so with personal data, he says. The Health Insurance Portability and Accountability Act, Sarbanes-Oxley, and the credit card industry's PCI standard put stringent controls on personal data. Running a data warehouse on an internal cloud gets around those issues. Teradata customer eBay runs a 5 PB data warehouse internally, adding 40 TB a day, on a grid of x86 servers.
That brings us to the realm of self-service portals, metering, and chargeback systems needed to make it possible to dole out IT resources on demand, measure consumption, and allocate expenses with increased granularity. The best way to set up such a system is with the virtual lab manager products that software developers use to provision servers, says Forrester Research analyst James Staten. VMware's vCenter Lab Manager, Citrix Systems' Lab Manager, and Surgient's Virtual Automation Platform all come with self-service portals.
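At its core, a chargeback system is a metering-and-aggregation problem: record consumption per department, price it against a rate card, and roll it up into a bill. A minimal sketch, with made-up rates, departments, and record fields:

```python
# Hypothetical sketch of metering and chargeback: per-department usage
# records are priced against a rate card and rolled up into charges.
# Rates, departments, and record fields are illustrative only.

RATES = {"cpu_hours": 0.10, "storage_gb_days": 0.02}  # dollars per unit

usage_records = [
    {"dept": "marketing", "metric": "cpu_hours", "amount": 500},
    {"dept": "marketing", "metric": "storage_gb_days", "amount": 1200},
    {"dept": "finance", "metric": "cpu_hours", "amount": 300},
]

def chargeback(records, rates):
    """Roll metered usage up into a per-department bill."""
    bills = {}
    for r in records:
        cost = r["amount"] * rates[r["metric"]]
        bills[r["dept"]] = bills.get(r["dept"], 0.0) + cost
    return bills

bills = chargeback(usage_records, RATES)
print(bills)  # {'marketing': 74.0, 'finance': 30.0}
```

The granularity Staten describes comes from the metering side: the finer the usage records, the more precisely costs can be allocated back to the business units that incurred them.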