Commentary | Charles Babcock | 4/17/2014 12:44 PM

Red Hat Linux Containers: Not Just Recycled Ideas

Red Hat and its partner, Docker, bring DevOps characteristics to Linux containers, making them lighter-weight vehicles than virtual machines for cloud workloads.

Some people accuse Red Hat of dusting off an old idea, Linux containers, and presenting them as if they were something new. Well, I would acknowledge Sun Microsystems offered containers under Solaris years ago and the concept isn't new. But Docker and Red Hat together have been able to bring new packaging attributes to containers, making them an alternative that's likely to exist alongside virtual machines for moving workloads into the cloud.

And containers promise to fit more seamlessly into a DevOps world than virtual machines do. Containers can provide an automated way for a workload's components to receive patches and updates -- without a system administrator's intervention. A workload sent out to the cloud a month ago may have had the Heartbleed vulnerability. When the same workload is sent in a container today, it's been fixed, even though a system administrator did nothing to correct it. The update was supplied to an open-source code module by the party responsible for it, and the updated version was automatically retrieved and integrated into the workload as it was containerized.
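
As a minimal sketch of that rebuild-to-patch flow, assuming a Docker-style build on a generic distribution base image (the image tag, paths, and script name below are hypothetical placeholders, not anything Red Hat or Docker actually ships):

    # Dockerfile (illustrative image tag, paths, and script name)
    FROM centos:centos6
    # Updating at build time pulls in the distribution's patched OpenSSL, so a
    # post-Heartbleed rebuild ships the fix without a sysadmin hand-patching anything.
    RUN yum -y update openssl && yum clean all
    ADD . /opt/myapp
    CMD ["/opt/myapp/run.sh"]

    # Rebuild and redeploy the same application code.
    docker build -t myapp:latest .
    docker run -d myapp:latest

The application code is untouched; simply rebuilding the container refreshes the vulnerable component.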

[Want to learn more about Linux containers? See Red Hat Announces Linux App Container Certification.]

That's one reason why Paul Cormier, Red Hat's president of products and technologies, speaking at the Red Hat Summit this week, called containers an emerging technology "that will drive the future." He didn't specifically mention workload security; rather, he cited the increased mobility a workload gains when it's packaged inside a container. In theory at least, a containerized application can be sent to different clouds, with the container interface navigating the differences. The container checks with the host server to make sure it's running the Linux kernel that the application needs. The rest of the operating system is resident in the container itself.
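
A quick way to see that split in practice is to compare what different containers report, assuming the Docker command-line client and a couple of public base images (the image tags here are illustrative):

    # Every container reports the host's kernel version...
    docker run --rm centos:centos6 uname -r
    docker run --rm ubuntu:14.04 uname -r          # same answer as the line above

    # ...but each image carries its own user-space files and libraries.
    docker run --rm ubuntu:14.04 cat /etc/os-release
    docker run --rm centos:centos6 cat /etc/redhat-release

The uname calls print the host kernel's version in every case, while the release files show each container shipping a different userland on top of it.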

Is that really much of an advantage? Aren't CPUs powerful enough and networks big enough to move the whole operating system with the application, the way virtual machines do? VMware is betting heavily on the efficacy of moving an ESX Server workload from the enterprise to a like environment in the cloud, the vCloud Hybrid Service. No need to worry about which Linux kernel is on the cloud server. The virtual machine has a complete operating system included with it.

Paul Cormier at Red Hat Summit 2014. (Source: Red Hat)

But that's one of the points in favor of containers, in my opinion. Sun used to boast about how many applications could run under a single instance of Solaris. In effect, all the containerized applications on a Linux cloud host are sharing the host's Linux kernel and providing the rest of the Linux user-mode libraries themselves. That makes each container a smaller, less-demanding workload on the host and allows more workloads per host.

Determining how many workloads fit per host is an inexact science. It will depend on how much of the operating system each workload originator decided to include in the container. But if a disciplined approach was taken and only the needed libraries were included, then a host server that can run 10 large VMs would be able to handle 100 containerized applications of similar caliber, said Red Hat CTO Brian Stevens Wednesday in a keynote at the Red Hat Summit.
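
As a rough, back-of-the-envelope way to test that density claim on a single machine, assuming a Docker host and a generic base image (nothing below reflects Red Hat's own benchmarks):

    # Start 100 idle containers from one small image and see what the host absorbs.
    for i in $(seq 1 100); do
      docker run -d --name "app$i" centos:centos6 sleep 86400
    done
    docker ps -q | wc -l    # how many containers are actually running
    free -m                 # memory the host has left after starting them

Because each container adds only its own user-space processes on top of the shared kernel, the marginal cost of one more container is far below that of booting another full virtual machine.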

It's the 10X efficiency factor, if Stevens is correct, that's going to command attention among Linux developers, enterprise system administrators, and cloud service providers. Red Hat Enterprise Linux is already a frequent choice in the cloud. It's not necessarily the first choice for development, where Ubuntu, Debian, and SUSE may be used as often as Red Hat. When it comes to running production systems, however, Red Hat rules.

Red Hat has produced a version of Red Hat Enterprise Linux, dubbed Atomic Host, geared specifically to run Linux containers. Do we need another version of RHEL? Will containers really catch on? Will Red Hat succeed in injecting vigor into its OpenShift platform for developers through this container expertise?

We shall see. But the idea of containers addresses several issues that virtualization could not solve by itself. In the future, containers may be a second way to move workloads into the cloud when certain operating characteristics are sought, such as speed of delivery to the cloud, speed of initiation, and concentration of workloads using the same kernel on one host.

Charles Babcock is an editor-at-large for InformationWeek, having joined the publication in 2003. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse ...

Comments
jemison288
User Rank: Moderator
4/18/2014 | 10:06:50 AM
Re: Too many features, not enough benefits
In theory, what you're saying is correct. But in practice, there isn't any set of containers that works for most development use cases. VM-based development (e.g., Java, .Net) is best suited for containers (since you compile the application for the VM, and thus the application itself is essentially a container), but even then, you need all sorts of supplemental libraries/daemons/etc. that have to be hooked up and configured in certain ways. Containers like Docker--in the interest of making things simpler--hide, or make it difficult to make, the configuration changes (at least for developers) that are necessary for proper functioning and performance. You end up spending your time both trying to figure out how to configure the libraries/daemons and trying to figure out how to get that configuration properly into the damn container.

Your comment about platforms vs. IaaS is a bit confusing to me, as I think most developers who are choosing CloudFormation/OpsWorks/Chef/Puppet/RightScale would say they're choosing IaaS over PaaS. Traditional PaaS (Heroku, CloudFoundry, OpenShift), as well as Docker, is much more heavy-handed and platform-y than the configuration management choices above. There isn't exactly a black-and-white line between launching a VM via API and launching a VM and executing what amounts to a bash script via API. But there's a very clear line when you require running within containers.


To say it a different way: Red Hat is optimizing for the sysadmin. Amazon is optimizing for the developer.  We know who wins here--it's the developer.  Software is eating the world.  You're right that developers don't care what technology they use.  They're going to pick the technology that gets them where they want to go the fastest, and that usually means whatever technology has good HOWTOs written up online that can be implemented at 4AM when the rest of the world is asleep.  And that technology, today, is configuration management, not PaaS or other containers.  And it's not clear to me how containers are going to win, since they're competing against PaaS and VMs, which aren't even in the running with developers.
jfeldman
User Rank: Strategist
4/18/2014 | 9:41:30 AM
Re: What is old is new.
EXACTLY the right question, "do we really need or want all that other stuff that comes included in the OS?"

And EXACTLY the right answer: NO.

Every time you deploy a VM, you need to actually think about "do I really want such-and-such service running?" "Do I want such-and-such library? And at what version?" On and on.

In the same way that security should be default-deny, services and libraries that just take up space and/or create attack surface or points of malfunction should be default-deny. There is so much junk that gets loaded on any modern OS. Reducing that complexity is good for troubleshooting and security.
jfeldman
User Rank: Strategist
4/18/2014 | 9:37:22 AM
Re: Too many features, not enough benefits
So, you're against reducing complexity? My experience with Chef/Puppet et al in the enterprise is that, YES, it adds a wonderful layer of automation, but ALSO, it adds significant complexity. When you write an app for PaaS, you don't worry about any of that stuff.

We all agree that IaaS is not (always) appropriate for legacy apps (it IS appropriate for dev/test/business continuity/DR of legacy apps, but that's a topic for another time). So the real question is, is it more appropriate to use PaaS or IaaS for "next generation" apps?

I have a hard time believing that MOST application developers give a rat's ass about infrastructure configuration. They just want to request resources and get them. If the platform takes care of all of the stuff that is necessary with IaaS (chef/puppet/rightscale whatever), then why on earth would most appdevs choose IaaS?
jemison288
User Rank: Moderator
4/17/2014 | 9:41:22 PM
Too many features, not enough benefits
The cloud revolution has been driven by developers, not sysadmins.  In fact, if DevOps shows us anything, it's that the job of the sysadmin is changing significantly, and like many jobs, being eaten by software.  (What is the rise of cloud computing if not the turning of servers into software?)

In this paradigm, where exactly does the rise of containers fit? The fact that Red Hat is embracing it so heartily gives me the impression that Red Hat sees (a) the continued reliance upon VMs by the enterprise [at least for legacy applications], and (b) the failure of PaaS to really catch on. I can hear the internal strategy meetings: "We can take the general concept from VMs that everyone's still using, but make them more PaaS-y through containers!"

I think this strategy is fundamentally flawed for two reasons.  First, it's true that enterprises are still stuck with VMs, but that doesn't mean their developers (the drivers of the revolution) like VMs.  Most developers hate VMs, because they make developers reliant upon sysadmins.  The great thing about AWS is that it freed developers from sysadmins. Developers care about being freed.  Sysadmins may prefer VMs, but the traditional Sysadmin-administering-VMs (or containers) model is dying out fast. 

Second, containers only work in some cases. What matters more than anything to developers is that they can get things up and running quickly and effectively. Most daemons/libraries/systems that developers want to put in place have instructions that assume a traditional Linux system: building from source, pulling in various libraries and daemons, and setting them up by following READMEs. Having to containerize what you're building as a developer adds significant complexity with essentially no benefit over using configuration management, which is much easier to understand and implement.

One main reason why PaaS hasn't caught on is because Chef/Puppet/CloudFormation have been so effective at automating the instantiation of applications on servers. Developers--the leaders of the cloud revolution--have very little incentive to pursue Docker and containers when they can get there from here with configuration management. It is only systems administrators and managers who have any incentive at all to pursue containers--will they have enough influence or even be around enough to make them mainstream?  I'm doubtful.
richwolski
User Rank: Apprentice
4/17/2014 | 7:10:58 PM
Re: What is old is new.
I'm not sure about containers, but the concept of the virtual machine dates to about 1970 with IBM.

Meyer, Richard A., and Love H. Seawright. "A virtual machine time-sharing system." IBM Systems Journal 9.3 (1970): 199-218.

It is striking how similar the goals were with respect to the ability to run multiple operating systems or OS versions. Seawright wrote a follow-on in 1979 with Richard MacKinnon:

Seawright, Love H., and Richard A. MacKinnon. "VM/370—a study of multiplicity and usefulness." IBM Systems Journal 18.1 (1979): 4-17.

That reads as being quite modern (at a high level) in its justification for virtualization.

However, the power of the economics should not be underestimated here. While machine virtualization is conceptually old, its value today derives from a completely different place in the technology economy, and that is most certainly new.
richwolski
User Rank: Apprentice
4/17/2014 | 6:09:14 PM
Old apples versus new oranges?
For cloud computing, VMs and containers are really two different technologies, each with its own value proposition. First -- Linux containers are really a namespace solution to the problem of isolation. Users are isolated from each other because they can't "name" (provide the address of) anything on the machine outside of their respective containers.

The advantage of this approach is that it is very efficient. Creating a unique namespace for each user under Linux is a complex task to implement, but once implemented it requires little computational effort or memory/storage capacity.

One disadvantage of this approach is that namespaces isolate access, but they don't isolate load. Linux cgroups try to solve this problem, and perhaps one day they will, but containers share kernel resources (device drivers, memory management, scheduling, etc.) in a way that can't be isolated.

Another disadvantage is that they work for Linux only. There isn't a notion of a Windows container running on Linux. Clouds today need to support both Linux and Windows, to be sure. In addition, language-specific environments like OSv (essentially Java running directly on a hypervisor) may offer new cloud hosting capabilities as they mature. Even the Linux distros have kernel preferences, so running an arbitrary Linux image that has a kernel conflict with the host kernel can be a legacy problem with containers.

VMs take a different approach to isolation. They don't share the kernel -- they share the devices that the kernel accesses. Doing so provides namespace isolation, a greater degree of load isolation, and the ability to run operating systems with different "kernels" on the same machines. They also require more maintenance, because each VM has its own kernel and (like all software) that additional software layer still requires care.

To me, Brian Stevens's comments seem quite cogent. Running containers inside VMs allows for both the management advantages that namespace isolation provides and the flexibility and performance isolation advantages that VMs provide. The key will be to develop cloud infrastructure services that are rich enough to exploit both technologies.
Charlie Babcock
User Rank: Author
4/17/2014 | 6:00:34 PM
A description of the emerging Docker and Red Hat relationship
Ben Kepes has written the best description I've seen of the emerging Red Hat/Docker relationship in his April 15 blog on Forbes.com: http://www.forbes.com/sites/benkepes/2014/04/15/red-hat-deepens-its-container-commitment-docker-front-and-center/ Too bad it's a holiday today in New Zealand; otherwise, he'd be commenting here.

 
Andrew Binstock
User Rank: Author
4/17/2014 | 4:59:49 PM
Re: What is old is new.
Well, if we're going to get into labeling new implementations as old wine in new bottles, let's give credit where credit is due: virtual containers, to my knowledge, first appeared commercially on IBM mainframes. However, the historical containers have little connection to today's implementations, because today's instances are tuned to an entirely different reality: cloud computing. More specifically, the ability to deploy and migrate VMs/containers quickly, which is a concept that simply did not exist on IBM or, IIRC, on Solaris servers.

So, yes, there are historical threads, but, no, they're not the same old thing just repackaged.
ruvy
User Rank: Apprentice
4/17/2014 | 3:44:11 PM
What is old is new.
It's an interesting space: add a little devops magic to a 15-year-old technology and voilà, you've got instant street cred. But the realities are a bit more complicated. Among the driving factors for the use of containerization is the bloat found within the very OS Red Hat sells. The old model of kernel space and user space is quickly becoming antiquated.

A quick refresher: kernel space is strictly reserved for running the privileged kernel, kernel extensions, and most device drivers -- essentially the core OS. In contrast, user space is the memory area where application software and some drivers execute -- where the magic happens. In the most basic terms, LXC is a userspace interface.

The logical question to ask is: do we really need or want all that other stuff that comes included in the OS? Increasingly, the answer is no.
Charlie Babcock
User Rank: Author
4/17/2014 | 2:37:53 PM
Re: Only RHEL?
Lorna, I think it would have to be another server operating system, one with a data center presence comparable to Linux's. That would be Windows. Microsoft may be compelled to consider containers, due to the advantage they bestow on Linux in the cloud era. But I don't think Windows can be adapted to containerization. That's something unique to the way Linux was built and the way the kernel works, along with open-source packaging. Anyone else with thoughts on this?