Cloud // Platform as a Service
Commentary
4/17/2014 12:44 PM
Charles Babcock

Red Hat Linux Containers: Not Just Recycled Ideas

Red Hat and its partner, Docker, bring DevOps characteristics to Linux containers, making them lighter-weight vehicles than virtual machines for cloud workloads.

Some people accuse Red Hat of dusting off an old idea, Linux containers, and presenting them as if they were something new. Granted, Sun Microsystems offered containers under Solaris years ago, and the concept isn't new. But Docker and Red Hat together have brought new packaging attributes to containers, making them an alternative that's likely to exist alongside virtual machines for moving workloads into the cloud.

And containers promise to fit more seamlessly into a DevOps world than virtual machines do. Containers can provide an automated way for a workload's components to receive patches and updates -- without a system administrator's intervention. A workload sent out to the cloud a month ago may have carried the Heartbleed vulnerability. When the same workload is sent in a container today, the flaw has been fixed, even though a system administrator did nothing to correct it. The fix was supplied to an open-source code module by the party responsible for it, and the updated version was automatically retrieved and integrated into the workload as it was containerized.
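The mechanics can be sketched with an ordinary container build file. This is an illustrative fragment, not Red Hat's actual tooling; the base-image tag, package manager invocation, and application paths are all assumptions:

```dockerfile
# Illustrative sketch: rebuilding this image after the base image or
# package repository was patched picks up the fixed OpenSSL at build
# time, with no administrator editing the application itself.
FROM rhel7:latest

# Pulls whatever openssl version the package maintainer has published,
# so a post-Heartbleed rebuild ships the patched library automatically.
RUN yum -y update openssl

COPY ./app /opt/app
CMD ["/opt/app/run.sh"]
```

The point is the rebuild step, not the specific instructions: the fix flows in from the upstream maintainer when the container is repackaged.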

[Want to learn more about Linux containers? See Red Hat Announces Linux App Container Certification.]

That's one reason why Paul Cormier, Red Hat's president of products and technologies, at the Red Hat Summit this week, called containers an emerging technology "that will drive the future." He didn't specifically mention workload security; rather, he cited the increased mobility a workload gains when it's packaged inside a container. In theory at least, a containerized application can be sent to different clouds, with the container interface navigating the differences. The container checks with the host server to make sure it's running the Linux kernel the application needs; the rest of the operating system is resident in the container itself.

Is that really much of an advantage? Aren't CPUs powerful enough and networks fast enough to move the whole operating system with the application, the way virtual machines do? VMware is betting heavily on the efficacy of moving an ESX Server workload from the enterprise to a like environment in the cloud, its vCloud Hybrid Service. No need to worry about which Linux kernel is on the cloud server; the virtual machine has a complete operating system included with it.

Paul Cormier at Red Hat Summit 2014. (Source: Red Hat)

But that's one of the points in favor of containers, in my opinion. Sun used to boast about how many applications could run under one version of Solaris. In effect, all the containerized applications on a Linux cloud host share the host's Linux kernel and provide the rest of the Linux user-mode libraries themselves. That makes each container a smaller, less-demanding workload on the host and allows more workloads per host.

Determining how many workloads fit per host is an inexact science. It will depend on how much of the operating system each workload's originator decided to include in the container. But if a disciplined approach was taken and only the needed libraries were included, then a host server that can run 10 large VMs would be able to handle 100 containerized applications of similar caliber, said Red Hat CTO Brian Stevens Wednesday in a keynote at the Red Hat Summit.
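Stevens's 10x claim is, at bottom, back-of-envelope arithmetic about per-workload footprints. The numbers below are illustrative assumptions, not Red Hat's figures: a VM carries a full guest OS, while a container shares the host kernel and carries only the libraries its packager included.

```python
# Back-of-envelope consolidation math for the 10x density claim.
# Footprints are assumed for illustration, not measured values.
HOST_RAM_GB = 64
VM_FOOTPRINT_GB = 6.4          # guest kernel + full userland + app
CONTAINER_FOOTPRINT_GB = 0.64  # shared host kernel; libraries + app only

vms_per_host = round(HOST_RAM_GB / VM_FOOTPRINT_GB)
containers_per_host = round(HOST_RAM_GB / CONTAINER_FOOTPRINT_GB)

print(vms_per_host, containers_per_host)  # 10 100
```

Change the assumed container footprint and the ratio moves with it, which is exactly why the article calls sizing "an inexact science": the multiplier depends entirely on how disciplined the packager was.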

It's the 10X efficiency factor, if Stevens is correct, that's going to command attention among Linux developers, enterprise system administrators, and cloud service providers. Red Hat Enterprise Linux is already a frequent choice in the cloud. It's not necessarily the first choice for development, where Ubuntu, Debian, and SUSE may be used as often as Red Hat. When it comes to running production systems, however, Red Hat rules.

Red Hat has produced a version of Red Hat Enterprise Linux, dubbed Atomic Host, geared specifically to run Linux containers. Do we need another version of RHEL? Will containers really catch on? Will Red Hat succeed in injecting vigor into its OpenShift platform for developers through this container expertise?

We shall see. But the idea of containers addresses several issues that virtualization could not solve by itself. In the future, containers may be a second way to move workloads into the cloud when certain operating characteristics are sought, such as speed of delivery to the cloud, speed of initiation, and concentration of workloads using the same kernel on one host.


Charles Babcock is an editor-at-large for InformationWeek, having joined the publication in 2003. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse ...

Comments
Charlie Babcock, User Rank: Author
4/17/2014 | 1:12:51 PM
IDC software development analyst adds comments
IDC's Al Hilwa commented in an email message as this piece was posted:

"Red Hat sees Linux containers as the next big thing and an enabler for new cloud workloads... [Red Hat] is differentiating its OpenShift PaaS technology with the support of containers. This is a sound strategy because PaaS is in great need for standardized approaches to host workloads and sub-VM containers like Docker provide a standard mechanism for efficiently encapsulating an application and its libraries in a portable way. Red Hat is also producing a version of RHEL called Atomic which is specifically optimized to support containers in a lightweight fashion. In theory Docker and similar technologies enable density of workloads and thus more cost-efficient operation of cloud applications. This is a boon for hosters as the PaaS market expands because cloud economics is a key driver of cloud adoption.

"Containers are goodness for developers because of standardization and portability. Containers are goodness for Red Hat, because it has embraced them ahead of its competitors and because they bridge IaaS and PaaS capabilities in the same way that Red Hat has long positioned itself in the enterprise."
Laurianne, User Rank: Author
4/17/2014 | 1:25:21 PM
Re: IDC software development analyst adds comments
Anywhere near a 10X efficiency factor compared to VMs would be significant. Great context on this news, Charlie.
rmerriam, User Rank: Apprentice
4/17/2014 | 1:34:08 PM
Heat Savings
Reducing the number of servers is starting to be critical due to power and heat issues. A lighter footprint on a server means a reduction in how many servers are needed. 
Lorna Garey, User Rank: Author
4/17/2014 | 2:14:36 PM
Only RHEL?
Charlie, Do you expect other network OSes to also support containers? 
Charlie Babcock, User Rank: Author
4/17/2014 | 2:37:53 PM
Re: Only RHEL?
Lorna, I think it would have to be another server operating system, one with a big presence in the data center compared to Linux. That would be Windows. Microsoft may be compelled to consider containers, due to the advantage they bestow on Linux in the cloud era. But I don't think Windows can be adapted to containerization. That's something unique to the way Linux was built and the way the kernel works, along with open source packaging. Anyone else with thoughts on this?
ruvy, User Rank: Apprentice
4/17/2014 | 3:44:11 PM
What is old is new.
It's an interesting space: add a little DevOps magic to a 15-year-old technology and voilà, you've got instant street cred. But the realities are a bit more complicated. Among the driving factors for the use of containerization is the bloat found within the very OS Red Hat sells. The old model of kernel space and user space is quickly becoming antiquated.

A quick refresher: kernel space is strictly reserved for running the privileged kernel, kernel extensions, and most device drivers -- essentially the core OS. In contrast, user space is the memory area where application software and some drivers execute, where the magic happens. In the most basic terms, LXC is a userspace interface.

The logical question to ask is: do we really need or want all that other stuff that comes included in the OS? Increasingly, the answer is no.
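ruvy's kernel-space/user-space refresher can be made concrete. On a Linux host, each process's namespace memberships are visible as symlinks under /proc/<pid>/ns, and a container is, at bottom, an ordinary process holding a fresh set of those namespaces. A minimal sketch, which simply reports what it finds (and returns an empty dict on non-Linux systems):

```python
import os

def namespaces(pid: str = "self") -> dict:
    """Return {namespace_name: identifier} for a process, e.g.
    {'pid': 'pid:[4026531836]', ...}. Processes sharing a container
    report identical identifiers; isolated processes differ."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):  # non-Linux host: nothing to inspect
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

print(namespaces())
```

Comparing the output for a process inside a container against one outside it shows different identifiers for the mnt, pid, and net namespaces, while the kernel underneath remains the same one for everybody.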
Andrew Binstock, User Rank: Author
4/17/2014 | 4:59:49 PM
Re: What is old is new.
Well, if we're going to get into labeling new implementations as old wine in new bottles, let's give credit where credit is due: virtual containers, to my knowledge, first appeared commercially on IBM mainframes. However, the historical containers have little connection to today's implementations, because today's instances are tuned to an entirely different reality: cloud computing. More specifically, the ability to deploy and migrate VMs/containers quickly, a concept that simply did not exist on IBM or, IIRC, on Solaris servers.

So, yes, there are historical threads, but, no, they're not the same old thing just repackaged.
Charlie Babcock, User Rank: Author
4/17/2014 | 6:00:34 PM
A description of the emerging Docker and Red Hat relationship
Ben Kepes has done the best description I've seen of the emerging Red Hat/Docker relationship in his April 15 blog on Forbes.com: http://www.forbes.com/sites/benkepes/2014/04/15/red-hat-deepens-its-container-commitment-docker-front-and-center/ Too bad it's a holiday today in New Zealand; otherwise, he'd be commenting here.
richwolski, User Rank: Apprentice
4/17/2014 | 6:09:14 PM
Old apples versus new oranges?
For cloud computing, VMs and containers are really two different technologies, each with its own value proposition. First, Linux containers are really a namespace solution to the problem of isolation. Users are isolated from each other because they can't "name" (provide the address of) anything on the machine outside of their respective containers.

The advantage of this approach is that it is very efficient. Creating a unique namespace for each user under Linux is a complex task to implement, but once implemented it requires little computational effort or memory/storage capacity.

One disadvantage of this approach is that namespaces isolate access, but they don't isolate load. Linux cgroups try to solve this problem, and perhaps one day they will, but containers share kernel resources (device drivers, memory management, scheduling, etc.) in a way that can't be isolated.

Another disadvantage is that they work for Linux only. There isn't a notion of a Windows container running on Linux, and clouds today need to support both Linux and Windows, to be sure. In addition, language-specific environments like OSv (essentially Java running directly on a hypervisor) may offer new cloud hosting capabilities as they mature. Even the Linux distros have kernel preferences, so running an arbitrary Linux image whose kernel conflicts with the host kernel can be a legacy problem with containers.

VMs take a different approach to isolation. They don't share the kernel -- they share the devices that the kernel accesses. Doing so provides namespace isolation, a greater degree of load isolation, and the ability to run operating systems with different kernels on the same machine. They also require more maintenance, because each VM has its own kernel, and (like all software) that additional software layer needs care.

To me, Brian Stevens's comments seem quite cogent. Running containers inside VMs allows for both the management advantages that namespace isolation provides and the flexibility and performance-isolation advantages that VMs provide. The key will be to develop cloud infrastructure services that are rich enough to exploit both technologies.
richwolski, User Rank: Apprentice
4/17/2014 | 7:10:58 PM
Re: What is old is new.
I'm not sure about containers, but the virtual machine concept dates to about 1970 with IBM:

Meyer, Richard A., and Love H. Seawright. "A virtual machine time-sharing system." IBM Systems Journal 9.3 (1970): 199-218.

It is striking how similar the goals were with respect to the ability to run multiple operating systems or OS versions. Seawright wrote a follow-on in 1979 with Richard MacKinnon:

Seawright, Love H., and Richard A. MacKinnon. "VM/370—a study of multiplicity and usefulness." IBM Systems Journal 18.1 (1979): 4-17.

That reads as being quite modern (at a high level) in its justification for virtualization.

However, the power of the economics should not be underestimated here. While machine virtualization is conceptually old, its value today derives from a completely different place in the technology economy, and that is most certainly new.