Amazon's Container Strategy, Examined - InformationWeek


10:05 AM
Charles Babcock

Amazon's Container Strategy, Examined

Amazon believes it can win developers with new AWS Container Service, despite Google's big headstart.


Amazon Web Services launched an EC2 Container Service last week at its Re:Invent conference, once again proving that the company can respond to industry needs with a light-footedness that belies its size.

It also illustrates a new battleground in cloud services, one that pits Amazon against Google and Microsoft for developer loyalties. "Developers love containers," said CTO Werner Vogels from the stage at Re:Invent.

Google has been leading the charge, having used containers for its internal systems for a decade. It launches 2 billion containers a week and has released Kubernetes, the only sophisticated container management system offered as open-source code. Google announced on Nov. 4 that Google Container Engine will help developers deploy Linux containers on Compute Engine.

Microsoft's response to Linux container popularity has been to recruit Docker to allow Windows applications to run natively in Docker containers. I'm not sure a Docker container for Windows will work exactly the same way that it does for Linux. Windows doesn't have the same modular structure that Linux does, but Microsoft says it'll make it work.

[Want to learn more about what Amazon revealed at Re:Invent? See Amazon Cloud: 10 New Insights.]

Amazon has refused to let the container parade pass by unnoticed. In the past, it concentrated on providing core infrastructure services, moving up to background systems such as the Relational Database Service, Elastic MapReduce, and Elastic Load Balancing. It has left it to third parties, such as Engine Yard or Heroku, to build out the development tools, platform, and supporting plumbing. At Re:Invent, Amazon changed that stance and plunged into the midst of developers' deployment concerns with the EC2 Container Service.

(Source: Amazon)

Amazon's container service, however, will not run containers where they run best: natively, on a bare-metal machine. It will run customers' containerized workloads inside Amazon's version of a Xen virtual machine, so they'll lose some of the efficiencies associated with large-scale container use. Each virtual machine carries its own operating system; without one, many containers can run side by side under a single host operating system, sharing its kernel.

In contrast, Google's operations speed is based in part on running hundreds or thousands of containers together, natively on bare-metal servers with no virtual machine layer. But that's strictly for its own systems. When it comes to running customers' containerized workloads, Google runs them inside a Compute Engine virtual machine, using the KVM hypervisor. The virtual machine provides a necessary boundary from the next customer's containerized workload.

"All the cloud providers at this time run containers inside a virtual machine," said Alex Polvi, CEO of CoreOS, which produces the slimmed-down CoreOS version of Linux expressly for running containers.

That will eventually change as we learn more about the characteristics of isolation at the operating system level. The great wave of virtualization that's swept over enterprise data centers, led by VMware, has been based on workload isolation at the hardware level -- a hypervisor lifts the workload off the hardware and lets it move around, talking to the hardware where it lands through the hypervisor. Right now, container isolation as we know it poses too many risks of one workload inadvertently (or otherwise) glimpsing data or communications of another. In effect, containers are moving parts in the server memory, now occupying this address space and now that. They operate in a dynamically altering landscape. It would be about as easy for the human brain to manage this landscape as for the eye to maintain boundaries in a turning kaleidoscope.

That containers effectively maintain isolation is a tribute to their design and to the abilities of the operating system. Will that isolation hold in all circumstances, including when a knowledgeable intruder introduces malicious code to disrupt it? No one has come up with a definitive answer yet, and Docker itself cautions against running multiple containers of a sensitive nature together in production. Someday the answer is likely to be yes, and a fuller realization of container efficiencies will follow. For now, every cloud relies on the hard, logical boundaries of a virtual machine.
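The operating-system abilities in question are Linux kernel features such as namespaces and control groups. A quick, Linux-only way to see the namespace mechanism at work, independent of any container runtime, is to list the namespaces the current process belongs to:

```shell
# Every process belongs to a set of kernel namespaces (pid, net, mnt, ...),
# shown as symlinks under /proc/<pid>/ns. A container runtime such as Docker
# isolates a workload by placing it in fresh namespaces, so two containers on
# the same host share one kernel but hold different namespace IDs.
ls -l /proc/self/ns/
```

This is what makes container isolation a software boundary enforced by one shared kernel, in contrast to the hardware-level boundary a hypervisor provides; a flaw in that kernel can expose every container on the host at once.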

But containers give developers a way to hand off their work from the coding table to the test and quality-assurance lab. They give developers a way to move a fully tested system out of the data center where it was developed and into a different one, with a high probability that it will run there. They supply the means to troubleshoot one part of a workload without disrupting the others: developers can modify a single layer in the container, and if the new configuration doesn't work, a container management system can roll the change back to a version that's known to work. All of this means developers can spend more time on new code and less on just getting things running.
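The layer-by-layer workflow described above can be sketched with a minimal, hypothetical Dockerfile (the image name and file paths here are illustrative, not from the article). Each instruction produces a cached image layer, so changing the application code rebuilds only the layers above it:

```dockerfile
# Base layer: a stock OS image, shared and cached across builds
FROM ubuntu:14.04

# Dependency layer: rebuilt only when the package list changes
RUN apt-get update && apt-get install -y python

# Application layer: rebuilt on every code change, leaving lower layers intact
COPY app.py /opt/app/app.py

# The same image runs on a laptop, in the QA lab, or on a cloud host
CMD ["python", "/opt/app/app.py"]
```

Because builds are tagged (for example, `docker build -t myapp:1.1 .`), earlier tags such as `myapp:1.0` remain available in the registry, which is what lets a container management system roll back to a known-good version when a new configuration fails.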

Google Container Engine, which observers believe is based on Kubernetes open-source code, can do these things. So can Amazon EC2 Container Service, Vogels said onstage at Re:Invent.

CoreOS's Polvi believes Google Container Engine will come to resemble OpenStack, with a large number of companies and contributors collaborating on its development and, in the long run, many different services relying on it. Amazon's EC2 Container Service, on the other hand, is an Amazon product, and its source code, like that of most of Amazon's systems, will remain private. CoreOS, by the way, is supported on Amazon's EC2, and Docker containers are already running under it there.

Sebastian Stadil, CEO of Scalr, a cloud front-end management firm, believes Google has too much knowledge and too much of a lead on Amazon for the latter to catch up overnight. Both Amazon EC2 Container Service and Google Container Engine can receive and schedule a container workload on a host, assigning the CPU, memory, and storage appropriate to the workload, he said. But Google is capable of doing more after that, in monitoring the container cluster and managing it for the benefit of the workload.

"It feels like Amazon rushed EC2 Container Service out the door in time for Re:Invent," Stadil said. "Until Amazon adds more cluster management, it will be inferior to Kubernetes" and Google Container Engine, he said.

Amazon's quick enlistment in containerization "is good for the Amazon ecosystem," said Michael Crandell, CEO of RightScale, but it's too early to know whether Google, Amazon, or Microsoft will ultimately benefit most from its approach to Docker users.

"They all want to get at developers," and building automated systems in the cloud that recognize containers is now an important way of doing that, Crandell said. But containers don't address other cloud management issues, such as capture and analysis of server log files, monitoring application performance, or rightsizing the network around the workload.

But each provider's desire to maintain developer interest in its services is laying the groundwork for greater cloud interoperation in the long run. If each cloud recognizes what to do with Docker containers when they arrive, it matters less that there's no smooth path between that cloud and its chief competitor. The incompatible virtual machine formats that now mark cloud operations can fade into the background if Docker is accepted as a standard way to ship software between destinations, with a high frequency of success upon launch.

These conditions haven't existed in outsourcing environments of the past or in multi-tenant cloud services of the present. Amazon's quick plunge into container management says containers are here to stay, even if they must, for the time being, be isolated in virtual machines. Containers are leveling the playing field in a new way and empowering developers to view the cloud as their alternative data center -- and perhaps, someday soon, their main data center.


Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...
User Rank: Author
11/17/2014 | 3:51:42 PM
Docker as translator/helper
"If each cloud recognizes what to do with Docker containers when they arrive, it matters less that there's no smooth path between that cloud and its chief competitor." Is this how you're thinking about Docker containers, readers? It's been one of the biggest cloud worries for years, the path between competing clouds and possible lock-in pain.
Brian Bartlett,
User Rank: Strategist
11/17/2014 | 2:58:10 PM
Re: Limited options for containers in the cloud today
I have seen quite an uptake under Ubuntu (bare metal), although that's not my area of interest (Red Hat). Just watching from the sidelines is what I consider fun, no matter what may come of it.
Charlie Babcock,
User Rank: Author
11/17/2014 | 12:59:27 PM
Limited options for containers in the cloud today
If you have a set of related applications that you've tested running together in containers on a single host, and that approach works safely, then your best cloud option is probably limited to those providers that offer dedicated bare metal hosts. And it would be best if they have CoreOS or Red Hat's Linux Atomic Host (still in beta) as selections for host operating systems. The chief candidates to do that are Rackspace and IBM SoftLayer. Otherwise, your Docker container is likely to be running inside a virtual machine somewhere.