Containers Explained: 9 Essentials You Need To Know - InformationWeek
Commentary
2/10/2015 09:00 AM
Charles Babcock
Containers Explained: 9 Essentials You Need To Know

Containers are the hottest topic in the data center. Here are the essentials that every well-informed IT pro should know and be able to explain.


Containers are the hottest trend in data center innovation -- even if not everyone involved in the discussion can articulate exactly why. If you've wondered whether containers might fit your data center strategy, or if you know they do and need a tool to explain containers to your peers, this guide's for you.

At the most basic level, containers let you pack more computing workloads onto a single server, and let you rev up capacity for new computing jobs in a split second. In theory, that means you can buy less hardware, build or rent less data center space, and hire fewer people to manage that gear. Containers are different from virtual machines -- which you probably already run using VMware or Microsoft Hyper-V virtualization software, or the open source options KVM or Xen. We'll explain some differences below.

Specifically, Linux containers give each application running on a server its own isolated environment in which to run, but all those containers share the host server's operating system. Since a container doesn't have to load up an operating system, you can create containers in a split second, rather than the minutes a virtual machine takes. That speed lets the data center respond very quickly if an application has a sudden spike in business activity, like people running more searches or ordering more products.

Here are nine essentials to know about containers. If you have others from your experience, please share in the comments below this article:

1. Containers are different from virtual machines

One way to contrast containers and VMs is to look at what's best about each: Containers are lightweight, requiring less memory space and delivering very fast launch times, while virtual machines offer the security of a dedicated operating system and harder logical boundaries. With a VM, a hypervisor talks to the hardware as if the virtual machine's operating system and application constituted a separate, physical machine. The operating system in the virtual machine can be completely different from the host operating system.

Containers offer higher-level isolation, with many applications running under the host operating system, all of them sharing certain operating system libraries and the operating system's kernel. There are proven barriers to keep running containers from colliding with each other, but there are some security concerns about that separation, which we'll explore later.


Both containers and virtual machines are highly portable, but in different ways. For virtual machines, the portability is between systems running the same hypervisor (usually VMware's ESX, Microsoft's Hyper-V, or the open source Xen or KVM). Containers don't need a hypervisor; they're bound instead to a certain version of an operating system, and an application in a container can move wherever a copy of that operating system is available.

One big benefit of containers is the standard way applications are formatted to be placed in a container. Developers can use the same tools and workflows, regardless of target operating system. Once in the container, each type of application moves around the network the same way. In this way containers resemble virtual machines, which are also package files that can move over the Internet or internal networks.

We already have Linux, Solaris, and FreeBSD containers. Microsoft is working with Docker (the company behind the open source container project of the same name) to produce Windows containers.

An application inside a Docker container can't move to another operating system. Rather, it is movable across the network in standard ways that make it easier to move software around data centers or between data centers. A single container will always be associated with a single version of the kernel of an operating system.
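That portability can be sketched with a few Docker commands. This is illustrative only, assuming Docker is installed on both hosts; the image name myapp is hypothetical:

```shell
# Package the application and its libraries into an image once.
docker build -t myapp:1.0 .

# Export the image as a portable archive file.
docker save myapp:1.0 -o myapp.tar

# Copy myapp.tar to any other Linux host with a compatible kernel,
# then import and run it there -- same image, same behavior.
docker load -i myapp.tar
docker run -d myapp:1.0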

2. Containers are less mature than virtual machines

One glaring difference today between containers and virtual machines is that virtual machines are a highly developed and very mature technology, proven in running the most critical business workloads. Virtualization software vendors have created management systems to deal with hundreds or thousands of virtual machines, and those systems are designed to fit into the existing operations of the enterprise data center.

Containers have more of a futuristic feel -- a young and promising technology that doesn't necessarily have every kink worked out. Developers are working on management systems to assign properties to a set of containers upon launch, or to group containers with similar networking or security needs together, but those systems are still a work in progress.

Docker's original formatting engine is becoming a platform, with lots of tools and workflows attached. And containers are getting support from some of the larger tech vendors. IBM, Red Hat, Microsoft, and Docker all joined Google last July in the Kubernetes project, an open source container management system for managing Linux containers as a single system.

Docker has 730 contributors to its container platform. CoreOS is a Linux distribution aimed at running modern infrastructure stacks, and it's attracting developers to Rocket, a new container runtime.

[Want to learn about Google's interest in containers? See Google OKs Docker Container Registry Service.]

3. Containers boot in a fraction of a second

Containers can be created much faster than virtual machines because a VM must retrieve 10 to 20 GB of an operating system from storage. The workload in the container uses the host server's operating system kernel, avoiding that step. Miles Ward, global head of solutions for Google's Cloud Platform, says containers can boot up in one-twentieth of a second.

Having that speed allows a development team to get project code activated, to test code in different ways, or to launch additional ecommerce capacity on its Web site -- all very quickly.
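A rough way to see this for yourself, assuming Docker is installed on a Linux host (a sketch, not a benchmark):

```shell
# Starting a container is dominated by ordinary process creation,
# not an operating system boot, so it completes in a fraction
# of a second once the image is cached locally.
time docker run --rm alpine true

# A virtual machine running the same one-line job would first have
# to boot a full guest operating system -- typically tens of seconds.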

4. Containers have proven themselves on a massive scale -- such as in Google search

Google Search is the world's biggest implementer of Linux containers, which the company uses for internal operations. Google is also expert at hosting containers in its App Engine and Compute Engine services, but like other cloud suppliers, it puts containers from different customers into separate KVM virtual machines because of the clearer boundaries between VMs.

For running Google Search operations, however, it uses containers by themselves, launching about 7,000 containers every second, which amounts to about 2 billion every week. Google supports search in different locations around the world, and levels of activity rise and fall in each data center according to the time of day and events affecting that part of the world. Containers are one of the secrets to the speed and smooth operation of the Google Search engine. That kind of example only spurs the growing interest in containers.

5. When IT folks call containers "lightweight," here's what they mean:

"Lightweight" in connection with containers means that, while many dozens of virtual machines can be put on a single host server, each running an application with its own operating system, hundreds or even thousands of containers can be loaded on a host. The containerized app shares the host's operating system kernel to execute work. Containers thus hold out the hope of becoming the ultimate form of intense computing for the space required and power consumed.

Over the last decade, virtualization represented an upheaval and consolidation of the one-application-per-server style of doing things. In the same sense, containers represent a potential upheaval of at least some virtualization workloads into an even more dense form of computing.

6. Containers raise security concerns

So are containers an unmitigated good? Hold your horses. Not much research has been published about the security of running, say, 1,200 containers side-by-side on a single server. One running container can't intrude upon or snoop on another's assigned memory space.

But what if two containers were allowed to talk to each other, and one of them was loaded with malicious code that snoops for encryption keys in the data that it's allowed to see? With so many things going on around it in shared memory, it might be only a matter of time before something valuable -- a user ID, a password, an encryption key -- fell into the malware's net.

Malicious code could also build up a general picture of what the linked container or containers were up to. Theoretically, this can't happen, because containers are designed to ensure the isolation of each application. But no one is sure whether computer scientists have envisioned and eliminated every circumstance where some form of malware snooping can occur.

Containers share CPU, memory, and disk in close proximity to each other, and that sort of thing worries security pros. It's likely, even though no one has done so on the record yet, that someone will find a way for code in one container to snoop on or steal data from another container.

7. Docker has become synonymous with containers, but it's not the only provider

Docker is a company that came up with a standard way to build out a container workload so that it could be moved around and still run in a predictable way in any container-ready environment.

All containers -- whether Linux-based Docker containers, Solaris Zones, or FreeBSD Jails -- provide some form of isolation for an application running on a multi-application host. So why is Docker all we hear about these days? Jails and Zones were indeed container pioneers, but their uptake was limited because comparatively few companies use the Solaris and FreeBSD operating systems. They still enjoy use in the enterprise, but are only lightly used in public cloud settings.

When Google and the developers who created Linux Control Groups successfully got container functionality included in the Linux kernel, containers were instantly within reach of every business and government data center, given Linux's near-ubiquity.
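The kernel features involved are visible on any modern Linux host. A quick sketch (Linux only):

```shell
# Control groups (cgroups) meter CPU, memory, and other resources
# for groups of processes; the kernel lists the available controllers:
cat /proc/cgroups

# Namespaces give each container its own view of process IDs, mounts,
# networking, and hostnames:
ls /proc/self/ns
```

Together, cgroups (resource limits) and namespaces (isolated views of the system) are the raw material that Docker and other container engines package into a usable tool.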

[ Docker CEO Ben Golub predicts Less Controversy, More Container Adoption In 2015. ]

About the same time, Docker came along as a company. Developers understood that containers would be much more useful and portable if there was one way of creating them and moving them around, instead of having a proliferation of container formatting engines. Docker, at the moment, is that de facto standard.

They're just like shipping containers, as Docker's CEO Ben Golub likes to say. Every trucking firm, railroad, and marine shipyard knows how to pick up and move the standard shipping container. Docker containers are welcome the same way in a wide variety of computing environments. 

8. Containers can save IT labor, speed updates. Maybe ...

There's an advantage to running production code in containers. When Docker builds a workload, it sequences the files in a particular order that reflects how they will be started.

One service or section of application logic needs to be fired up before another, and containers are actually built in layers that can be accessed independently of one another. A code change known to affect just one layer can be executed without touching the other layers. That makes changing the code less dangerous than in the typical monolithic application, where an error stalls the whole application.

It also makes it easier to modify the application. If the change occurs to one layer, it can be tested and launched into production. If a problem develops, it can be quickly rolled back, because developers only touched one layer.
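A hedged sketch of what that layering looks like in practice (the base image and file names here are illustrative, not from the article):

```dockerfile
# Each instruction below produces one cached image layer.
FROM python:3.9-slim

# The dependency manifest changes rarely, so these layers are
# usually served from cache on rebuild.
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Application code changes often; only the layers from here on
# are rebuilt, which keeps updates and rollbacks small.
COPY . /app/
CMD ["python", "/app/main.py"]
```

Because each layer is content-addressed, shipping an update means moving only the changed layers over the network, not the whole image.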

At the same time, big applications can be broken down into many small ones, each in its own container. Gilt, the online site for discounted luxury goods, broke seven large applications down into 300 microservices, with a small team maintaining each service. These services can be updated more frequently and safely than the large applications could, according to CTO Michael Bryzek.

9. Containers still face some unresolved problems

Containers, and their champion vendor Docker, must overcome some lurking problems to gain mass adoption. As originally conceived, Docker was a formatting engine for applications that run on a single computer. A container, or a series of linked containers, would still exist together on a single host. What if the application needs 10 servers, 100 servers, or even 1,000?

Docker still wants to think of one application running on one computer, but big enterprise IT shops -- think multinational banks, energy companies, automakers, and retailers -- want to know a tool can handle massive scale.

Google dealt with this problem by creating its own multi-container management system, the core of which now forms the open source Kubernetes Project. Kubernetes will let IT shops build a cluster to run containers, providing networking and a container-naming system that lets a container administrator manage many more containers at once. It lets them run big applications in multiple containers across many computers without needing to know all the ins and outs of container cluster management.
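The declarative style this enables can be sketched with a minimal manifest (names and image here are illustrative, not from the article):

```yaml
# Ask the cluster to keep three identical copies of a container
# running; Kubernetes places them across the available machines
# and replaces any copy that fails.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
```

The administrator declares the desired state (three copies of this container) and Kubernetes handles the placement, which is exactly the cluster-management gap described above.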

The project has been producing open source code only since July of last year, so there may be a way to go. But it's notable that Google has put code unique to running its operation into the public arena. IBM, Red Hat, and Microsoft, not to mention a couple of dozen startups specializing in container operations, are extremely interested in where it leads. VMware is likewise interested in Kubernetes and in working with Docker to ensure its virtualization environment works well with containers. Enterprise users will ultimately see the benefits of all this activity.

Conclusion

Those who are in constant pursuit of better, faster, and cheaper computing see a lot to like in containers. The CPU of an old-style, single application x86 server is utilized somewhere around 10% to 15%, at best. Virtualized servers push that utilization up to 40%, 50%, or in rare cases 60%.

Containers hold the promise of using an ever-higher percentage of the CPU, lowering the cost of computing and getting more bang for the hardware, power, and real estate invested. Containers also hold out the threat that if something goes wrong -- like a security breach -- it may go wrong in a much bigger way. That's why there's no rush to adopt containers everywhere. Containers are worth using, but use them with caution, and proceed at a deliberate, not breakneck, pace.


Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...

Comments
kstaron, 2/25/2015 | 11:55:26 AM
More development time
Thanks for the post. It clarified exactly what a container is and how it might provide solutions. I'm particularly interested in containerizing as a way to make upgrades run more smoothly. For smaller development teams, where application upgrades take a larger percentage of hours, a way to get them done faster with less downtime can allow more time for actual development instead of maintenance.
Charlie Babcock, 2/17/2015 | 6:40:03 PM
Linux kernel experts know Linux containers
Linux containers are on the agenda of the Linux Collaboration Summit this week (Feb. 18-20) in Santa Rosa, Calif. I hope to address the security issue with experts there. Google will also discuss its experience in using containers. Looking forward to it.
jagibbons, 2/16/2015 | 9:21:40 AM
Re: Containers are not the same as virtualization, but....
That is an important distinction, Charlie. This is not an either/or proposition. Both may have their place in a number of environments.
Charlie Babcock, 2/13/2015 | 1:51:33 PM
Containers are not the same as virtualization, but....
Containers are not a form of virtualization. They're a form of application isolation, which sounds a lot like virtualization but isn't. But to get the concept and importance of containers across, I recently found myself telling a group of listeners, containers represent the "re-virtualization" of the data center. I think the same ratio of applications to hosts will reoccur -- a greater concentration achieved -- through containers.
SaneIT, 2/11/2015 | 8:09:39 AM
Re: Containers Explained: 9 Essentials You Need To Know
" when you later mentioned applications that need tons of servers to run on, it just sounds like Virtual Machines (without containers) are the way to go for the time being"

 

While you and I may both feel that way I have to believe that there is a breaking point where containers make much more sense.  As the needs scale up every tiny bit of overhead that you can remove makes a difference so while you or I may look at VMs and see the more mature solution as the way to go big when you need to spread applications across many servers Google probably looks at it as just a matter of connecting those containers so that they can talk at the highest level necessary and manage the need for multiple servers that way.  It all depends on the tools you are working with and how willing you are to go out on a limb to build out a solution that works for you.

 
zerox203, 2/10/2015 | 5:40:04 PM
Re: Containers Explained: 9 Essentials You Need To Know
This is definitely a topic that, to the outside eye, may look like an awful lot of hype and not a lot of substance. Indeed, while this article shed a lot of light on some aspects and is very much appreciated, it still sounds like there are a lot of blanks left to be filled in before containers see widespread adoption. For example, the differences between containers and virtual machines were well-explained and go a long way towards illuminating the core concept. At the same time, when you later mentioned applications that need tons of servers to run on, it just sounds like Virtual Machines (without containers) are the way to go for the time being. Google can take advantage of that maximized scale, but what about the rest of us? Thanks for including specific use cases, but I could've used some more to explain the shortcomings of containers and not only the benefits.

I think the security issue bears a little more discussion here. 'it seems great but the security is untested' was the rallying cry of anti-cloud sentiment not so long ago, and while they certainly had a leg to stand on, it turned out that most of those worries were unfounded. I definitely believe there could be 'issues' with the inherently connected nature of containers, but to overly cite vague 'security' without any specific known vulnerabilities seems like inviting a slippery slope kind of argument. Moreover, I'd think that how much these potential vulnerabilities affect you depends on who you are and what you're running on the containers. If you're in a compliance-focused industry like financial or healthcare, or if the containers are 'touching' customer data, then sure. The rest of us needn't be so squeamish. At least, let's wait until the reality becomes clear before making up our minds.
davidfcarr, 2/10/2015 | 1:32:26 PM
Great explanation
Thank you for this post, Charles. The use of containers is one of those concepts I knew I ought to understand better and now I do. I particularly appreciated the care you took to explain the difference between these containers and virtualization, which is something I admit I was pretty foggy on.