Commentary
Charles Babcock
4/17/2014 12:44 PM

Red Hat Linux Containers: Not Just Recycled Ideas

Red Hat and its partner, Docker, bring DevOps characteristics to Linux containers, making them lighter-weight vehicles than virtual machines for cloud workloads.

Some people accuse Red Hat of dusting off an old idea, Linux containers, and presenting them as if they were something new. Well, I would acknowledge Sun Microsystems offered containers under Solaris years ago and the concept isn't new. But Docker and Red Hat together have been able to bring new packaging attributes to containers, making them an alternative that's likely to exist alongside virtual machines for moving workloads into the cloud.

And containers promise to fit more seamlessly into a DevOps world than virtual machines do. Containers can provide an automated way for the components to receive patches and updates -- without a system administrator's intervention. A workload sent out to the cloud a month ago may have had the Heartbleed vulnerability. When the same workload is sent in a container today, it's been fixed, even though a system administrator did nothing to correct it. The update was supplied to an open-source code module by the party responsible for it, and the updated version was automatically retrieved and integrated into the workload as it was containerized.
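Roughly, that rebuild-and-redeploy cycle might look like the minimal sketch below. The image and tag names are hypothetical; the point is the pattern, not any particular commands Red Hat or Docker prescribe.

    # Assumed workflow: the base image's maintainer has already published the
    # OpenSSL fix; rebuilding the application image picks it up automatically.
    docker pull rhel7                    # refresh the base image, which now carries the patch
    docker build --no-cache -t myapp .   # rebuild the app image from its Dockerfile
    docker run -d myapp                  # the redeployed workload no longer has Heartbleed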

[Want to learn more about Linux containers? See Red Hat Announces Linux App Container Certification.]

That's one reason why Paul Cormier, Red Hat's president of products and technologies, called containers an emerging technology "that will drive the future" at the Red Hat Summit this week. He didn't specifically mention workload security; rather, he cited the increased mobility a workload gains when it's packaged inside a container. In theory at least, a containerized application can be sent to different clouds, with the container interface navigating the differences. The container checks with the host server to make sure it's running the Linux kernel that the application needs. The rest of the operating system is resident in the container itself.

Is that really much of an advantage? Aren't CPUs powerful enough and networks big enough to move the whole operating system with the application, the way virtual machines do? VMware is betting heavily on the efficacy of moving an ESX Server workload from the enterprise to a like environment in the cloud, its vCloud Hybrid Service. No need to worry about which Linux kernel is on the cloud server. The virtual machine has a complete operating system included with it.

Paul Cormier at Red Hat Summit 2014. (Source: Red Hat)

But that's one of the points in favor of containers, in my opinion. Sun used to boast about how many applications could run under one version of Solaris. In effect, all the containerized applications on a Linux cloud host share the host's Linux kernel and provide the rest of the Linux user-mode libraries themselves. That makes each container a smaller, less-demanding workload on the host and allows more workloads per host.
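You can see the shared-kernel arrangement directly with a couple of commands. The image names below are just common public examples, not anything from Red Hat's announcement:

    uname -r                                    # kernel version on the host
    docker run --rm centos uname -r             # a container reports the same host kernel
    docker run --rm ubuntu cat /etc/os-release  # yet each container brings its own userland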

Determining how many workloads fit per host is an inexact science. It will depend on how much of the operating system each workload originator decided to include in the container. But if a disciplined approach is taken and only the needed libraries are included, then a host server that can run 10 large VMs would be able to handle 100 containerized applications of similar caliber, said Red Hat CTO Brian Stevens Wednesday in a keynote at the Red Hat Summit.

It's the 10X efficiency factor, if Stevens is correct, that's going to command attention among Linux developers, enterprise system administrators, and cloud service providers. Red Hat Enterprise Linux is already a frequent choice in the cloud. It's not necessarily the first choice for development, where Ubuntu, Debian, and Suse may be used as often as Red Hat. When it comes to running production systems, however, Red Hat rules.

Red Hat has produced a version of Red Hat Enterprise Linux, dubbed Atomic Host, geared specifically to run Linux containers. Do we need another version of RHEL? Will containers really catch on? Will Red Hat succeed in injecting vigor into its OpenShift platform for developers through this container expertise?

We shall see. But the idea of containers addresses several issues that virtualization could not solve by itself. In the future, containers may be a second way to move workloads into the cloud when certain operating characteristics are sought, such as speed of delivery to the cloud, speed of initiation, and concentration of workloads using the same kernel on one host.

Can the trendy tech strategy of DevOps really bring peace between developers and IT operations -- and deliver faster, more reliable app creation and delivery? Also in the DevOps Challenge issue of InformationWeek: Execs charting digital business strategies can't afford to take Internet connectivity for granted.

Charles Babcock is an editor-at-large for InformationWeek, having joined the publication in 2003. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse ...

Comments
Charlie Babcock, User Rank: Author
4/17/2014 | 1:12:51 PM
IDC software development analyst adds comments
IDC's Al Hilwa commented in an email message as this piece was posted:

"Red Hat sees Linux containers as the next big thing and an enabler for new cloud workloads... [Red Hat] is differentiating its OpenShift PaaS technology with the support of containers. This is a sound strategy because PaaS is in great need for standardized approaches to host workloads and sub-VM containers like Docker provide a standard mechanism for efficiently encapsulating an application and its libraries in a portable way. Red Hat is also producing a version of RHEL called Atomic which is specifically optimized to support containers in a lightweight fashion. In theory Docker and similar technologies enable density of workloads and thus more cost-efficient operation of cloud applications. This is a boon for hosters as the PaaS market expands because cloud economics is a key driver of cloud adoption.

"Containers are goodness for developers because of standardization and portability. Containers  are goodness for Red Hat, because it has embraced them  ahead of its competitors and because the bridge IaaS and PaaS capabilities in the same way that Red Hat has long positioned itself in the enterprise."
Laurianne, User Rank: Author
4/17/2014 | 1:25:21 PM
Re: IDC software development analyst adds comments
Anywhere near a 10X efficiency factor compared to VMs would be significant. Great context on this news, Charlie.
rmerriam, User Rank: Apprentice
4/17/2014 | 1:34:08 PM
Heat Savings
Reducing the number of servers is starting to be critical due to power and heat issues. A lighter footprint on a server means a reduction in how many servers are needed. 
Lorna Garey, User Rank: Author
4/17/2014 | 2:14:36 PM
Only RHEL?
Charlie, do you expect other network OSes to also support containers?
Charlie Babcock, User Rank: Author
4/17/2014 | 2:37:53 PM
Re: Only RHEL?
Lorna, I think it would have to be another server operating system, one with a big presence in the data center compared to Linux. That would be Windows. Microsoft may be compelled to consider containers, due to the advantage they bestow on Linux in the cloud era. But I don't think Windows can be adapted to containerization. That's something unique to the way Linux was built and the way the kernel works, along with open source packaging. Anyone else with thoughts on this?
neuroserve, User Rank: Apprentice
6/14/2014 | 4:28:19 AM
Re: Only RHEL?
Virtuozzo containers for Windows have been available from Parallels for a long time.
ruvy, User Rank: Apprentice
4/17/2014 | 3:44:11 PM
What is old is new.
It's an interesting space: add a little DevOps magic to a 15-year-old technology and voilà, you've got instant street cred. But the realities are a bit more complicated. Among the driving factors for the use of containerization is the bloat found within the very OS Red Hat sells. The old model of kernel space and user space is quickly becoming antiquated.

A quick refresher: kernel space is strictly reserved for running the privileged kernel, kernel extensions, and most device drivers -- essentially the core OS. In contrast, user space is the memory area where application software and some drivers execute, where the magic happens. In the most basic terms, LXC is a userspace interface.

The logical question to ask is: do we really need or want all that other stuff that comes included in the OS? Increasingly the answer is no.
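As a concrete taste of that userspace interface, here is a minimal sketch using unshare(1) from util-linux -- one of the same kernel namespace facilities LXC builds on (the hostname used is made up):

    sudo unshare --uts /bin/bash   # start a shell in its own UTS (hostname) namespace
    hostname container-demo        # rename "the machine" -- visible only in this namespace
    hostname                       # prints: container-demo
    exit                           # leave the namespace
    hostname                       # the host's name is untouched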
Andrew Binstock, User Rank: Author
4/17/2014 | 4:59:49 PM
Re: What is old is new.
Well, if we're going to get into labeling new implementations as old wine in new bottles, let's give credit where credit is due: virtual containers, to my knowledge, first appeared commercially on IBM mainframes. However, the historical containers have little connection to today's implementations, because today's instances are tuned to an entirely different reality: cloud computing. More specifically, the ability to deploy and migrate VMs/containers quickly, which is a concept that simply did not exist on IBM or, IIRC, on Solaris servers.

So, yes, there are historical threads, but, no, they're not the same old thing just repackaged.
richwolski, User Rank: Apprentice
4/17/2014 | 7:10:58 PM
Re: What is old is new.
I'm not sure about containers, but the concept of the virtual machine dates to about 1970 with IBM.

Meyer, Richard A., and Love H. Seawright. "A virtual machine time-sharing system." IBM Systems Journal 9.3 (1970): 199-218.

It is striking how similar the goals were with respect to the ability to run multiple operating systems or OS versions. Seawright wrote a follow-on in 1979 with Richard MacKinnon:

Seawright, Love H., and Richard A. MacKinnon. "VM/370—a study of multiplicity and usefulness." IBM Systems Journal 18.1 (1979): 4-17.

That reads as being quite modern (at a high level) in its justification for virtualization.

However, the power of the economics should not be underestimated here. While conceptually machine virtualization is old, its value today derives from a completely different place in the technology economy, and that is most certainly new.
jfeldman, User Rank: Strategist
4/18/2014 | 9:41:30 AM
Re: What is old is new.
EXACTLY the right question, "do we really need or want all that other stuff that comes included in the OS?"

And EXACTLY the right answer: NO.

Every time you deploy a VM, you need to actually think about "do I really want such-and-such service running?" "Do I want such-and-such library? And at what version?" On and on.

In the same way that security should be default-deny, services and libraries that just take up space and/or create attack surface or points of malfunction should be default-deny. There is so much junk that gets loaded on any modern OS. Reducing that complexity is good for troubleshooting and security.
Charlie Babcock, User Rank: Author
4/17/2014 | 6:00:34 PM
A description of the emerging Docker and Red Hat relationship
Ben Kepes has done the best description I've seen of the emerging Red Hat/Docker relationship in his April 15 blog on Forbes.com: http://www.forbes.com/sites/benkepes/2014/04/15/red-hat-deepens-its-container-commitment-docker-front-and-center/ Too bad it's a holiday today in New Zealand. Otherwise, he'd be commenting here.
richwolski, User Rank: Apprentice
4/17/2014 | 6:09:14 PM
Old apples versus new oranges?
For cloud computing, VMs and containers are really two different technologies, each with its own value proposition. First -- Linux containers are really a namespace solution to the problem of isolation. Users are isolated from each other because they can't "name" (provide the address of) anything on the machine outside of their respective containers.

The advantage of this approach is that it is very efficient. Creating a unique namespace for each user under Linux is a complex task to implement, but once implemented it requires little computational effort or memory/storage capacity.

One disadvantage of this approach is that namespaces isolate access, but they don't isolate load. Linux cgroups try to solve this problem, and perhaps one day they will, but containers share kernel resources (device drivers, memory management, scheduling, etc.) in a way that can't be fully isolated.
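For readers who want to see the mechanism, here is a rough sketch of cgroup-v1 usage as mounted on a typical 2014-era distro (the group name "demo" is invented). Namespaces hide names; cgroups are the knob for metering shared kernel resources:

    sudo mkdir /sys/fs/cgroup/memory/demo                # create a new memory cgroup
    echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes  # 256 MB cap
    echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks  # place the current shell under the cap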

Another disadvantage is that they work for Linux only. There isn't a notion of a Windows container running on Linux. Clouds today need to support both Linux and Windows, to be sure. In addition, language-specific environments like OSv (essentially Java running directly on a hypervisor) may offer new cloud hosting capabilities as they mature. Even the Linux distros have kernel preferences, so running an arbitrary Linux image that has a kernel conflict with the host kernel can be a problem with containers.

VMs take a different approach to isolation. They don't share the kernel -- they share the devices that the kernel accesses. Doing so provides namespace isolation, a greater degree of load isolation, and the ability to run operating systems with different kernels on the same machines. They also require more maintenance, because each VM has its own kernel, and (like all software) that additional software layer is not immune from care.

To me, Brian Stevens's comments seem quite cogent. Running containers inside VMs allows for both the management advantages that namespace isolation provides and the flexibility and performance-isolation advantages that VMs provide. The key will be to develop cloud infrastructure services that are rich enough to exploit both technologies.
jemison288, User Rank: Moderator
4/17/2014 | 9:41:22 PM
Too many features, not enough benefits
The cloud revolution has been driven by developers, not sysadmins.  In fact, if DevOps shows us anything, it's that the job of the sysadmin is changing significantly, and like many jobs, being eaten by software.  (What is the rise of cloud computing if not the turning of servers into software?)

In this paradigm, where exactly does the rise of containers fit? The fact that Red Hat is embracing it so heartily gives me the impression that Red Hat sees (a) the continued reliance upon VMs by the enterprise [at least for legacy applications], and (b) the failure of PaaS to really catch on. I can hear the internal strategy meetings: "We can take the general concept from VMs that everyone's still using, but make them more PaaS-y through containers!"

I think this strategy is fundamentally flawed for two reasons.  First, it's true that enterprises are still stuck with VMs, but that doesn't mean their developers (the drivers of the revolution) like VMs.  Most developers hate VMs, because they make developers reliant upon sysadmins.  The great thing about AWS is that it freed developers from sysadmins. Developers care about being freed.  Sysadmins may prefer VMs, but the traditional Sysadmin-administering-VMs (or containers) model is dying out fast. 

Second, containers only work in some cases. What matters more than anything to developers is that they can get things up and running quickly and effectively. Most daemons/libraries/systems that developers want to put in place have instructions that assume traditional Linux systems and doing things like building from source and pulling in various libraries and daemons and setting them up through READMEs. Having to containerize what you're building as a developer adds significant complexity with essentially no benefit over using configuration management, which is much easier to understand and implement.

One main reason why PaaS hasn't caught on is because Chef/Puppet/CloudFormation have been so effective at automating the instantiation of applications on servers. Developers--the leaders of the cloud revolution--have very little incentive to pursue Docker and containers when they can get there from here with configuration management. It is only systems administrators and managers who have any incentive at all to pursue containers--will they have enough influence or even be around enough to make them mainstream?  I'm doubtful.
jfeldman, User Rank: Strategist
4/18/2014 | 9:37:22 AM
Re: Too many features, not enough benefits
So, you're against reducing complexity? My experience with Chef/Puppet et al in the enterprise is that, YES, it adds a wonderful layer of automation, but ALSO, it adds significant complexity. When you write an app for PaaS, you don't worry about any of that stuff.

We all agree that IaaS is not (always) appropriate for legacy apps (it IS appropriate for dev/test/business continuity/DR of legacy apps, but that's a topic for another time). So the real question is, is it more appropriate to use PaaS or IaaS for "next generation" apps?

I have a hard time believing that MOST application developers give a rat's ass about infrastructure configuration. They just want to request resources and get them. If the platform takes care of all of the stuff that is necessary with IaaS (chef/puppet/rightscale whatever), then why on earth would most appdevs choose IaaS?
jemison288, User Rank: Moderator
4/18/2014 | 10:06:50 AM
Re: Too many features, not enough benefits
In theory, what you're saying is correct. But in practice, there isn't any set of containers that works for most development use cases. VM-based development (e.g., Java, .Net) is best suited for containers (since you compile the application for the VM, and thus the application itself is essentially a container), but even then, you need all sorts of supplemental libraries/daemons/etc. that have to be hooked up and configured in certain ways. Containers like Docker--in the interest of making things simpler--hide or make it difficult (at least for developers) to make the configuration changes that are necessary for proper functioning and performance. You end up spending your time both trying to figure out how to configure the libraries/daemons and trying to figure out how to get that configuration properly into the damn container.
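For the record, the configuration plumbing being wrestled with boils down to flags like these (the image name, variables, and paths below are hypothetical):

    docker run -d \
        -e DB_HOST=db.internal \
        -e DB_PORT=5432 \
        -v /srv/myapp/conf:/etc/myapp:ro \
        myapp
    # -e injects configuration as environment variables; -v bind-mounts a
    # config directory from the host, read-only, into the container.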

Your comment about platforms vs. IaaS is a bit confusing to me, as I think most developers who are choosing CloudFormation/OpsWorks/Chef/Puppet/RightScale would say they're choosing IaaS over PaaS.  Traditional PaaS (Heroku, CloudFoundry, OpenShift), as well as Docker, are much more heavy-handed and platform-y than those above configuration management choices.  There isn't exactly a black-and-white line between launching a VM via API and launching a VM and executing what amounts to a bash script via API.  But there's a very clear line when you require running within containers.
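That "launch a VM and execute what amounts to a bash script via API" option, sketched minimally with the AWS CLI -- the AMI ID, instance type, and script contents are placeholders:

    # Write a bootstrap script that configures the instance on first boot.
    cat > bootstrap.sh <<'EOF'
    #!/bin/bash
    yum -y update
    yum -y install httpd
    service httpd start
    EOF
    # Launch a VM and hand it the script as user-data; cloud-init runs it at boot.
    aws ec2 run-instances --image-id ami-12345678 \
        --instance-type m3.medium --user-data file://bootstrap.sh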


To say it a different way: Red Hat is optimizing for the sysadmin. Amazon is optimizing for the developer.  We know who wins here--it's the developer.  Software is eating the world.  You're right that developers don't care what technology they use.  They're going to pick the technology that gets them where they want to go the fastest, and that usually means whatever technology has good HOWTOs written up online that can be implemented at 4AM when the rest of the world is asleep.  And that technology, today, is configuration management, not PaaS or other containers.  And it's not clear to me how containers are going to win, since they're competing against PaaS and VMs, which aren't even in the running with developers.
jfeldman, User Rank: Strategist
4/18/2014 | 10:12:45 AM
Re: Too many features, not enough benefits
To be clear, Heroku does call their dynos "containers." https://devcenter.heroku.com/articles/dynos
Laurianne, User Rank: Author
4/18/2014 | 11:30:42 AM
Re: Too many features, not enough benefits
Thanks for weighing in, Joe and Jonathan. Sounds like we better explore this subject with further opinion columns.
TeaPartyCitizen, User Rank: Apprentice
4/18/2014 | 1:26:34 PM
Re: Too many features, not enough benefits
Using containers is the only way of putting dev, QA, SIT, and production on the same box. There is no configuring of a machine. With Docker it is bare-metal provisioning. One writes a script to provision the machine from bare metal. No hocus-pocus. The company is no longer truck-sensitive. It's infrastructure as code. No heavy hypervisors, no loading of an OS, and no redundant resources. Containers make life simpler for developers and DevOps. It could even be the differentiation which Linux needs.
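For illustration, such a provisioning script can be as small as the sketch below (the contents are invented, not TeaPartyCitizen's actual setup): the entire environment is declared as code and built identically for dev, QA, SIT, and production.

    # Build an image from a Dockerfile supplied on stdin -- nothing is
    # configured by hand on the machine itself.
    docker build -t demoenv - <<'EOF'
    FROM centos:centos6
    RUN yum -y install python
    CMD ["python", "-m", "SimpleHTTPServer", "8000"]
    EOF
    docker run -d -p 8000:8000 demoenv   # the same image runs in every environment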
jemison288, User Rank: Moderator
4/18/2014 | 1:43:27 PM
Re: Too many features, not enough benefits
This is classic "think like a sysadmin".  In the AWS world, I can have as many boxes as I want right now.  Why is the "box" a limiting factor?  If you're talking about "how many different environments on a physical machine", you're already losing to AWS.  The box doesn't matter.  What matters is enabling the developers.
TeaPartyCitizen, User Rank: Apprentice
4/18/2014 | 3:28:42 PM
Re: Too many features, not enough benefits
It costs money. Duh.
neuroserve, User Rank: Apprentice
6/14/2014 | 4:42:41 AM
Re: Too many features, not enough benefits
"If you're talking about "how many different environments on a physical machine", you're already losing to AWS.  The box doesn't matter.  What matters is enabling the developers."

Why are these developers still using AWS then -- when things like Jelastic have existed for quite a while?
joshuamckenty, User Rank: Apprentice
4/18/2014 | 2:17:21 PM
UX Matters
I put together a slide deck on Docker for some of our investors and partners a few months back, and it's been useful in clarifying the confusion between containers (an infrastructure technology), Docker (a unified user experience around lightweight and introspected configuration management of containers), and various PaaS options (which deal with the myriad environment details of running multi-server applications, including scaling and upgrades).

http://www.slideshare.net/joshuamckenty/but-what-about-docker

People (whether they're so-called Developers or SysAdmins, an increasingly blurry line) don't use software - they use interfaces. And the Docker interfaces are beautiful. At least when you're getting started.

Whether Red Hat can, through any amount of either honest contribution or grandstanding, convince developers that the best way to consume either containers or PaaS (or VMs, for that matter) is to buy RHEL, remains to be seen.

To the best of my knowledge, no significant IT disruption has been successfully commercialized by the legacy vendors it disrupted. This is why Red Hat is the major Linux vendor, instead of IBM or Microsoft. It's why most folks buy Hadoop from Hortonworks or Cloudera.

Is PaaS an important and transformative IT disruption? Absolutely.

Are containers likely to play a part in that story? I'd give it even odds.

Will a legacy OS vendor such as Red Hat or Microsoft become a dominant player in the PaaS space? History tells us that it's unlikely.
jemison288, User Rank: Moderator
4/18/2014 | 3:02:58 PM
Re: UX Matters
Josh-- I agree with what you're saying, but you do gloss over the fact that developers favor "working" over "beautiful interfaces".  The challenge with Docker and PaaS is that you end up having to spend a lot of time shoehorning configurations into them, and the "interface" becomes more of a blockade to functionality than an asset.  (For trivial applications, this is not an issue, but developers generally aren't working with trivial applications).
Charlie Babcock, User Rank: Author
4/18/2014 | 3:35:31 PM
Containers, what they're good for, debated here
Piston founder and CTO Joshua McKenty summarizes the pros and cons of Docker containers in a set of slides, linked below in his perceptive comments. Thanks for additional light shed by Jonathan Feldman, CIO of the City of Asheville, N.C., and Joe Emison, CTO of BuildFax, in the discussion below. Joe points out drawbacks of containers, saying they favor systems admins over developers. I still lean in, Joe. Thanks to Rich Wolski, founder and CTO of Eucalyptus, who got this debate going with points on the strengths and weaknesses of virtual machines vs. containers. Rich tips his hat to remarks by CTO Brian Stevens at the Red Hat Summit, which concluded Thursday. Enomaly founder Reuven Cohen, in Toronto, is not too impressed with containers; he's now top tech advocate at Citrix. Glad to see TeaPartyCitizen join in, and don't forget Andrew Binstock, editor of Dr. Dobb's, all below. I know there are others who want to join this debate; please do so. Ah, Alex Freedland, CEO of Mirantis, did so. Thanks!
mthiele570, User Rank: Apprentice
4/18/2014 | 5:48:50 PM
Great discussion with no obvious answer to RHat Container good or Container bad
When VMware first offered server virtualization, the more advanced users among us were having some of these same debates (was VMware the company real, was the tech real, was it not just recycled mainframe tech?). The reality is it's too early to tell whether RHAT containers will be either the company or the technology winner. However, what I will postulate is that this technology is an elegant midway technology to fill the gap between legacy virtualization and what's next (a better OS?). The fact is most virtualization (especially after all the extras are added) becomes too expensive, top-heavy, or both. I see the future of virtualized environments as more closely mirroring HPC environments, as it's a natural progression from where most of us are today. In the HPC(ish) world the ability to use a lighter, more cost-effective solution like containers will likely have a very strong appeal to many. If I have to prognosticate further, I'd say that the biggest risk to RHAT is more likely that an alternative h/w abstraction solution will arise before containers have gained a big enough foothold to be considered the de facto solution, a la VMware.
jemison288, User Rank: Moderator
4/19/2014 | 5:23:55 PM
Re: Great discussion with no obvious answer to RHat Container good or Container bad
Mark-- I think that, to some extent, your question has already been answered.  At least in the public cloud computing space, VMs have been the standard, and I don't see that changing. Outsourcing the problems around hardware virtualization to vendors like Amazon has been fine, and many organizations have been fine with the costs.

So in order for Red Hat to win here, we have to accept that the private cloud world is necessarily going to be different from the public cloud world in a really fundamental way--namely, that you'll have to deal with another abstraction layer of containers in order to get your workloads running on your private cloud.

This seems unlikely to me. I really doubt that we're going to have dramatically different deployment configurations for private vs. public cloud. After all, I think many organizations are going to be using both (public for dev/test agility, private for regulated workloads and better cost control for enterprises) and/or organizations are going to be hiring developers who have grown up in the public cloud.

Why do we think that Red Hat will be successful in imposing an unnecessary learning curve on developers? I don't think they will. Unless containers can become de facto in the public cloud world, the main abstraction layer will be the VM, and everyone will use tools that minimize time-to-develop for public and private clouds alike on VMs. (And to the TeaPartyCitizen who implies that somehow costs are cheaper with containers on a single box: all you need to do is look at developers' salaries to realize that you're being penny-wise and pound-foolish)
alex_freedland, User Rank: Apprentice
4/18/2014 | 8:40:10 PM
A view from OpenStack perspective
RedHat is looking to address a real problem: RHEL is unsuitable for fast-moving cloud ecosystems. Its release cadence puts it years behind as essential new capabilities are introduced into the world of cloud. By introducing containers, RedHat is looking to create a smaller and faster-moving core, while keeping its old OS for legacy apps.

While an elegant and wise marketing move for RedHat itself (one that will most definitely improve the agility and performance of workloads), it does not solve the fundamental disconnect between RedHat's desire to impose its operating system as a standard for OpenStack and the community's need to have its own Linux kernel that develops at the same pace as OpenStack itself.
ltucker57002, User Rank: Apprentice
4/18/2014 | 9:09:46 PM
The New New
Great post and interesting discussion by all.  

Eric Schmidt used to say, in reference to Java: "In software we often solve problems by creating a new layer of abstraction, which then allows for innovation both above and below the line." In Java, this was the JVM. We are seeing this happening in cloud computing, and we get to ask the question: For cloud-based application development, what's the platform?

It's clear that building on cloud services for compute, storage, networking, etc. accelerates application development, since we don't have to worry about building the underlying infrastructure. Whether to build to a virtual machine, container, or PaaS model, however, isn't quite so clear-cut. Coming from Sun, I saw many of the advantages of Solaris containers and then was somewhat surprised to see the success of the virtual machine model at AWS.

Lesson learned: sometimes having a familiar environment, even if it means bringing your own OS, may involve more work, but it's at least the work that you know. Now that containers are coming back, our choices in cloud computing are expanding again. Both models, including PaaS, will survive, as they address somewhat different problems.

The interesting shift will come as we continue to see the emergence of new models, platforms, and services to address the development and operational needs of large scale cloud-native web apps. 
rictelford, User Rank: Apprentice
4/21/2014 | 9:12:22 AM
Old vs New isn't the issue - it is Complexity vs Simplicity
It doesn't really matter whether containers are a new idea -- what is important is that they bring a concept to the cloud that helps reduce the complexity of workload migration. I think this is just another example of the maturation of the cloud and the march toward flexible infrastructure-as-code. To me it is sort of like going to a furnished apartment vs. an unfurnished one (stay with me on this analogy!). An unfurnished apartment meets some set of your needs, but you need a moving van of stuff (the VM) to meet the rest of your needs. A furnished apartment addresses more of your needs, so you just need a few boxes and suitcases (the container). Both have a purpose, but now you have choice and flexibility...
Charlie Babcock, User Rank: Author
4/21/2014 | 12:43:43 PM
RE: Old vs New isn't the issue - it is Complexity vs Simplicity
I like IBM's Ric Telford's analogy. From my point of view, if you want to be light on your feet and free to move at short notice, then furnished apartments -- containers -- are for you. Also, Lew Tucker's telling comment, "sometimes having a familiar environment, even if it means bringing your own OS, may involve more work, but it's at least the work that you know," neatly sums up several pro-virtualization arguments, but he's not trying to decide the issue for the virtual machine status quo. He was quite familiar with containers as head of cloud computing for Sun, prior to becoming CTO of cloud for Cisco, and he concludes there are probably containers somewhere in the cloud's future. Well said. And Joe Emison at BuildFax continues to push back on pro-container comments with his own insight and doubts about their usefulness. Thanks, Joe. You've all made this the best-informed debate I've seen on the subject.
Charlie Babcock, User Rank: Author
4/21/2014 | 1:10:26 PM
What are the possibilities of Thiele's "alternative h/w abstraction"?
Can someone tell me what SuperNAP Exec. VP of Data Center Technology Mark Thiele means when he says below, "The biggest risk to RHAT is more likely that an alternative h/w abstraction solution will arise before containers have gained a big enough foothold to be considered the de facto solution..."? What form would such hardware take? A supersized CPU, many cores, with an embedded, skeleton operating system that automatically treated every application loaded on it as a container?
Charlie Babcock, User Rank: Author
4/21/2014 | 3:27:22 PM
Altoros CEO says containers have a future -- and future difficulties
Renat Khasanshryn, CEO of Altoros, in Sunnyvale, Calif., and Minsk, Belarus, sent these comments, which I'm adding to the stream. Altoros, among other things, consults on Cloud Foundry use, so Renat pays attention to PaaS. He wrote:

"In 10 years, containers, not VMs, will sit atop of the Host OS for 50%+ of cloud-native workloads. Docker is the most "trendy", but not the most advanced container technology out there. First of all, many thanks to Red Hat and Docker for a great job popularizing containers among developers. Developers rule in this world, and will get what they want (containers).
  1. <Will Red Hat succeed in injecting vigor into its OpenShift platform for developers through this container expertise?> Answer: The momentum surrounding Cloud Foundry will likely result in its domination of the open PaaS category. I believe OpenShift will either join Cloud Foundry or will take advantage of any weakness in the Cloud Foundry ecosystem to carve out a space of its own. Stay tuned in the next two weeks for a blog post about why OpenShift should now join the Cloud Foundry Foundation, against the prediction of some industry pundits.
  2. <Do we need another version of RHEL?> Answer: Not really. Instead, we need a solid cloud-native operating system. We need an operating system that is lightweight, without the junk that comes with a "one-thing-fits-all" design. CoreOS and OSv give us a great peek at what a cloud-native OS will offer to end users.
  3. < Will containers really catch on?> Answer: I am very bullish on the future of containers and believe that containers, not VMs/hypervisors, will dominate cloud-native workloads in as little as 10 years from now.

By 2007, VMware had lost the battle for the title of #1 provider of multi-tenancy solutions for the hosting market. A few little-known companies emerged as winners. These few companies hold the most advanced container tech out there.

From 2000 until today, container packaging technologies, including Docker, have faced enormous problems getting changes accepted by the upstream Linux kernel communities.


Containers' future is bright -- however, not without challenges

I believe containers will ultimately win over Type 2 hypervisors for cloud-native workloads; however, they face some stormy winds:
    1. Short-term, adoption of container-based products, including OpenShift and Cloud Foundry, will continue to suffer from the Big 3 public cloud players having no incentive to replace a combination of hypervisor + host OS with containers.
    2. Today's cloud leaders pursue lock-in strategies by taking advantage of the lack of portability for VM-native workloads. Container-based workloads and PaaS only speed up the race to the bottom of cloud pricing, while not providing any decent amount of lock-in for cloud providers.
krishsubramanian, User Rank: Strategist
4/22/2014 | 5:47:00 PM
Misconceptions about containers
Disclaimer: I am a Red Hat employee but this comment is based on my understanding of the technology and industry. It cannot be construed as a statement from my employer.

There seem to be a lot of misconceptions about containers. Leaving aside the FUD about Red Hat joining CloudFoundry, I want to address some real points about containers.

1) Red Hat never said containers are new. In fact, it is very well known that containers have been around for a long time. Even Red Hat has been using containers inside OpenShift for a long time. What is different now is that Red Hat is embracing Docker for its containers rather than doing something on its own. Why reinvent the wheel when Red Hat can work with Docker to do containers right? Also, unlike others in the industry, Red Hat is not new to open source. OSS is part of Red Hat's DNA. That is why it joined the OpenStack Foundation and contributed a large number of resources to the project, and it is doing a similar thing with Docker.

2) Charlie has already explained containers well. I don't want to repeat that here. In short, containers are about efficiencies and scale. It is not just developer efficiency but ops efficiency.

3) The innovation Red Hat has been highlighting is not about Docker per se but about how to make containers aware of other containers and how orchestration can be done more effectively. It is new, and Red Hat has done a great job of bringing it to the OSS community.

4) There seems to be some complete misunderstanding of the underlying technology in one of the comments. A comment dismisses Project Atomic on one hand as unnecessary and praises CoreOS as revolutionary. Both CoreOS and Project Atomic have been doing similar things: acting as an underlying host for containers. CoreOS is doing a great job, and Project Atomic is doing a similar thing, starting with the well-trusted Red Hat Enterprise Linux as its starting point. Calling it a stripped-down version of RHEL is wrong.
Charlie Babcock, User Rank: Author
4/23/2014 | 6:29:15 PM
Red Hat puts in last word; stay tuned
Thanks to Krishnan Subramanian, director of OpenShift strategy at Red Hat, for his comment, which appears for the time being to close out this debate over the value of containers and their future. Stay tuned. This is sure to be a topic of future discussion.