IBM, Microsoft, Red Hat, Docker unite behind Google's Kubernetes as a container management system.
An unlikely group of allies has formed to promote Google's Kubernetes container migration and management system. Kubernetes means "helmsman" in Greek, and Google has used its extensive experience in running Linux containers to build the Kubernetes system.
It's now backed by IBM, Red Hat, Microsoft, and Docker. They will all work to standardize container management in much the same way the surging Docker format has standardized how container workloads are built.
"Google has the best infrastructure in the world. Our infrastructure engine is the best that money can buy," boasts Craig McLuckie, product manager for Kubernetes and the Google Cloud Platform, which includes Google App Engine and Compute Engine.
Each member of the group will contribute developers and coordinate efforts to make Kubernetes a more general-purpose container management system, one that's been tested for production use. McLuckie claims Kubernetes can handle the provisioning of containers, migrate them, monitor them as they run, assess their operational status, and provide dynamic scheduling and load balancing.
"We're contributing heavily to the Docker project. We also want Kubernetes to go wherever you take a Docker container," McLuckie told us in an interview.
McLuckie says containers in the multi-tenant cloud will in most instances run inside a virtual machine to provide an added layer of isolation. Containers divide up a physical host into discrete sets of resources for each application, and many containers can run together on a single host. But they don't have enough defenses to shield themselves from active malware lurking in a neighboring container on the same host. So multi-tenant hosts will most likely assign a virtual machine to each customer, then run multiple Docker containers inside the VM, sacrificing some of the efficiency gains that play out more favorably in Google's own more homogeneous environment.
"Containers running under Kubernetes are not a replacement for virtual machine technology," says McLuckie. Rather, they are an alternative way to migrate and run workloads, one that is highly flexible about which target environments they may move into. The destination doesn't need much more than the designated Linux kernel that the workload was intended to run under. In addition, many containers from the same customer might run efficiently together inside one virtual machine, instead of each workload requiring the overhead of its own VM.
In addition, Kubernetes lets administrators assign priorities to workloads. If a customer-facing website or application gets more traffic, a larger share of the virtual machine's resources can be diverted to it rather than keeping a secondary load running at the same pace.
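The priority-based sharing McLuckie describes amounts to proportional allocation: a workload with a larger weight gets a larger slice of the VM's resources. A minimal sketch of that arithmetic (the function, workload names, and weights here are hypothetical illustrations, not the Kubernetes API):

```python
# Hypothetical sketch: divide a VM's CPU among workloads in
# proportion to administrator-assigned priority weights.
def allocate_cpu(total_millicores, weights):
    """Return each workload's integer share of the VM's CPU.

    weights maps workload name -> priority weight; a workload with
    twice the weight receives twice the CPU share.
    """
    total_weight = sum(weights.values())
    return {name: total_millicores * w // total_weight
            for name, w in weights.items()}

# A busy customer-facing site outweighs a background report job.
shares = allocate_cpu(4000, {"web-frontend": 3, "batch-report": 1})
# shares -> {"web-frontend": 3000, "batch-report": 1000}
```

Under this scheme, raising the frontend's weight when traffic climbs automatically shrinks the slice left for secondary loads, which is the behavior the article describes.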
One of the appeals of containers is their ease of monitoring for demand and scaling up once demand appears. A cloud customer might have 10 containers identical to one that's running, but the 10 are doing nothing until traffic builds. The Kubernetes management system can then fire up additional containers and route traffic to them in much less time than it takes to fire up virtual machines, because the operating system for the additional containers was already running, with no additional copies needed. Each instance of a virtual machine must have its own operating system.
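The scale-up decision in the scenario above can be sketched as a simple sizing rule: activate just enough of the standby containers to cover the current request rate. This is an illustrative control-loop fragment, not Kubernetes internals; the function name, capacity figure, and thresholds are assumptions:

```python
import math

# Illustrative autoscaling sketch: decide how many pre-built container
# replicas to activate for the current request rate. Activating one is
# fast because the host OS is already running; no new OS has to boot,
# unlike spinning up a fresh virtual machine.
def replicas_needed(requests_per_sec, capacity_per_replica, max_replicas):
    """Smallest replica count that covers the load, capped at the pool size."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(1, min(needed, max_replicas))

# Ten identical containers stand by; traffic of 450 req/s at an
# assumed 100 req/s per container activates five of them.
active = replicas_needed(450, 100, 10)  # -> 5
```

A real scheduler would also route traffic to the newly activated replicas and scale back down as demand fades, but the sizing step itself is this simple.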
"We've decoupled the world of the application from the world of the operating system," says McLuckie, pointing out one of the chief differences between containers and virtual machines.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ...