Google Unleashes Container Engine For Docker Workloads

The Google Compute Cloud has gained a service for launching Docker containers and managing their lifecycles.

Charles Babcock, Editor at Large, Cloud

August 27, 2015

5 Min Read
(Image: 4x6/iStockphoto)


Google's strength in running large groups of containers will give its Compute Engine Infrastructure-as-a-Service (IaaS) offering added appeal to developers now that the company has taken its Google Container Engine out of its alpha phase and made it generally available.

Google Container Engine, which goes by the acronym GKE to avoid being confused with Google Compute Engine (GCE), orchestrates the launch and management of Docker containers on a cluster of virtual machines running on Google Compute Engine.

It's broadly similar to Amazon's EC2 Container Service (announced last November at Amazon Web Services' re:Invent conference), which orchestrates the launch of Docker containers on EC2.

Either can build and deploy containers to a cluster and monitor their lifecycle there.

However, Google has built its Kubernetes orchestration and management system into its container engine, and with it the concept of pods. Containers in a pod are deployed together on the same machine within a cluster, which enables them to share storage and network resources.

Google launches 2 billion containers a week and has deep experience in managing homogeneous containers as a set, as opposed to the simpler task of distributing thousands of dissimilar containers across different servers and clusters.

Pods, when deployed correctly, can be used to help scale out microservices, allowing a set of containers to share access to a caching system or a pool of persistent storage for speed of operation. Efficient use of containerized services leads to the rapid building and fast execution of microservice applications, sometimes referred to as next-generation or "cloud native" applications.
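
To make the pod idea concrete, here is a minimal sketch of a pod that runs a web front end alongside a cache container. It is shown as a Python dictionary purely for illustration, and the image names and ports are placeholders rather than anything from Google's announcement; in practice such a manifest is written in YAML and submitted to the cluster with kubectl.

```python
import json

# Illustrative only: a pod grouping two containers that are scheduled onto the
# same machine and share the pod's network, so the front end can reach the
# cache on localhost.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "web-frontend",
        "labels": {"app": "web-frontend"},
    },
    "spec": {
        "containers": [
            {
                # The application container serving HTTP traffic.
                "name": "frontend",
                "image": "example/web-frontend:1.0",  # placeholder image
                "ports": [{"containerPort": 8080}],
            },
            {
                # A cache sidecar, reachable from the front end at
                # localhost:11211 because both containers share the pod's
                # network namespace.
                "name": "cache",
                "image": "memcached:1.4",
                "ports": [{"containerPort": 11211}],
            },
        ],
    },
}

print(json.dumps(pod, indent=2))
```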

"Everything at Google, from Search to Gmail, is packaged and run in a Linux container ... Container Engine represents the best of our experience," said Craig McLuckie, senior product manager for Google Compute Engine, in a blog announcing the change Wednesday, Aug. 26.

Google didn't start out using Docker containers, having come up with its own approach to Linux containers 10 years before Docker became popular. But McLuckie has previously been clear that Docker represents a de facto standard container format for the rest of the industry, and that Google will standardize on it rather than try to convert the world to its own approach.

At the Linux Collaboration Summit last February, McLuckie said, "Docker captured lightning in a bottle."

Google Container Engine also includes Replication Controllers, which manage the lifecycle of pods and ensure there are enough pods and containers running to deliver a given application service; Services, load balancers that abstract a set of related pods and route traffic to the right one in the set; and Labels, identifiers that Kubernetes uses to select homogeneous pods to perform a common task.
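
As a rough sketch of how Labels and Services fit together, the following Service selects every pod carrying the label app: web-frontend, such as the pod sketched above, and load-balances incoming traffic across them. Again, this is an illustrative Python dictionary with assumed names and ports, not a manifest from Google's documentation.

```python
import json

# Illustrative only: the selector matches the "app: web-frontend" label, so the
# Service routes traffic to whichever pods in the cluster carry that label.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "selector": {"app": "web-frontend"},
        "ports": [{"port": 80, "targetPort": 8080}],
        "type": "LoadBalancer",  # expose the set of pods behind one address
    },
}

print(json.dumps(service, indent=2))
```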

"Many applications take advantage of multiple containers; for example, a Web application might have separate containers for the webserver, cache, and database. Container Engine ... makes it easy for your containers to work together as a single system," McLuckie wrote in his blog.

Container Engine clusters are managed by Google reliability engineers, who handle infrastructure updates and keep the service continuously available. A user declares the amount of CPU and memory to reserve, the number of replicas to run, the networking, and the keep-alive policy, and the engine does the rest.
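
Those declarations end up in manifests like the sketch below, which asks for three replicas of the earlier pod template and reserves a quarter of a CPU core and 256MB of memory for each container. The figures and names are assumptions chosen for illustration, not values from Google's service.

```python
import json

# Illustrative only: a replication controller that keeps three copies of the
# pod template running, each reserving 250 millicores of CPU and 256MB of RAM.
controller = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "replicas": 3,
        "selector": {"app": "web-frontend"},
        "template": {
            "metadata": {"labels": {"app": "web-frontend"}},
            "spec": {
                "containers": [
                    {
                        "name": "frontend",
                        "image": "example/web-frontend:1.0",  # placeholder
                        "resources": {
                            "requests": {"cpu": "250m", "memory": "256Mi"}
                        },
                    }
                ]
            },
        },
    },
}

print(json.dumps(controller, indent=2))
```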

Container Engine includes a scheduler, which launches commissioned containers into a virtual machine cluster, manages them based on the declared requirements, and kills them off at the end of their lifecycle.

Container Engine sets up logging across a container cluster and monitors container health for feedback on how well the application is running. It can also commission additional memory or CPU capacity for a given cluster to help it meet the traffic demand on an application.

[Want to learn about a recent Google cloud mishap? See Google Loses Data: Who Says Lightning Never Strikes Twice?]

Docker containers make it easier to move software around between clusters, data centers, or clouds. As Docker has emerged as a de facto standard, its container format has served largely as the model for the specification being developed by the Open Container Initiative, an effort that also draws on the App Container (appc) specification drawn up by CoreOS, a supplier of a container host Linux.

If CoreOS, Docker, and other container technology suppliers adhere to the spec, it will be a step toward making container operation on clouds more interoperable.

For that matter, a Docker container that is ready to be deployed by the Amazon Container Service today is also theoretically ready for operation on Google Compute Engine. Each container packages the elements its application needs in its deployment environment, along with instructions about the operating system services it requires from the host.

At the OpenStack Silicon Valley gathering of OpenStack implementers in Mountain View, California, Wednesday, Aug. 26, McLuckie went a step further in his description of Docker. "Docker recognized the value of the stackable file system. You can just deploy it and it's great. It's good for the developer experience," he told the assembly at the Computer History Museum.

Docker creates "a really amazing first five hours" for a developer as he or she finishes writing code and gets it ready for deployment. But McLuckie added, "I'm worried about its operation for the next five years." For improvements on that front, Google and other cloud suppliers will keep working on their container management services.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
