
Docker 1.0 Backed By IBM, Red Hat, Rackspace

Industry heavyweights line up at inaugural DockerCon user conference to support Docker as the de facto standard for Linux containers.

Docker, the company that sponsors the open source project of the same name, is gaining allies in making its commercially supported Linux container format a de facto standard. Linux containers are a way of packaging up applications and related software for movement over the network or Internet. Once at their destination, they launch in a standard way and allow multiple containers to run under a single host operating system.

Sun originally pioneered the concept with Solaris Containers. The Linux community has broadened the concept through the Docker project, which was launched with 15 contributors in March 2013 and now is available in its 1.0 version, with 460 developers as contributors.

"We need Docker's capabilities to power the Web," said John Engates, CTO of Rackspace, Monday at DockerCon14, the first developer conference for Docker. In the future, "a planet-scale cloud will be ubiquitous, and it will be easy to move from one cloud to another" using Docker, he predicted.

"I've never seen a community coalesce so fast," said Docker CEO Ben Golub.

[Want to learn more about Linux containers? See Red Hat Touts Linux Containers For Cloud.]

Both Google and IBM will send engineering representatives to the keynote podium Tuesday to describe why Docker is a sound way to move and maintain workloads that contain multiple complex and related parts.

Monday, Boden Russell, an advisory software engineer with IBM's Global Technology Services, revealed benchmarking that showed it is quicker to launch workloads in containers than in virtual machines. Containers require less memory and CPU at launch, according to Russell's statistics. Once running, however, they tend to use about the same amount of CPU and memory for a process like MySQL online transaction processing, he said.

In addition, both Red Hat and Rackspace announced they are backing Docker as their choice for a container system that works with their products. Engates was invited to the podium early Monday to talk about how the Rackspace Cloud will include a pre-installed copy of Docker for customers to use if they choose. "Without that support, the first thing a customer would have to do is install Docker himself. This way, he'll just have to push a few buttons to get Docker running" and start building a workload to be deployed in Rackspace infrastructure, he said in an interview before the start of the conference.

Red Hat was singled out as an early partner by Docker's Golub. "I can't thank Red Hat enough. In many ways they stand alone" as an early believer in the value of Docker as open source code. Red Hat is responsible for 289 code commits in the 1.0 version, he said.

Ubuntu, Debian, and CentOS also support the Docker format, but Red Hat is using Docker as the cornerstone of a project inside OpenStack -- Project Solum -- as a way to build, test, migrate, and deploy workloads to OpenStack clouds, such as HP's or Rackspace's, or clouds built by its own customers. Red Hat executive VP and CTO Brian Stevens said Red Hat is working on its Atomic version of Red Hat Enterprise Linux, which will be optimized to run a Docker system and Docker containers.

It is also working on tooling, called Cockpit, to make it easier to assemble applications and instrument how well they're running using Docker, Stevens said. It has founded the GearD project, as well, to produce a command-line client that links Docker containers on different hosts and ties them into a single system manager.

"Atomic and Cockpit are built for a world of Docker apps," a development that Red Hat is betting on. GearD "knows how to take this (containerized) code and spin up three services for it," he said.

Another speaker at DockerCon Tuesday will be Eric Brewer, VP of infrastructure at Google, which also uses Docker containers. "Google and Docker are a very natural fit," said Brewer, a kind of uber-engineer at Google. "We both have the same vision of how applications should be built," Brewer told Wired in an interview published Monday.

Monday's DockerCon offered an overview of things that Linux containers are -- and are not:

• A container provides a way to assemble an application composed of different parts in layers. The layers can be moved around as a unit, but any single layer may be manually or automatically updated without disturbing the other layers. Linux containers move applications a step toward becoming self-maintaining, rather than requiring IT staff to maintain them.

• Containers are a standardized, lighter-weight way than virtual machines to isolate several applications running on the same server. Containers share one operating system; each virtual machine must be equipped with its own.

• Like a shipping container, a Linux container is a way to package a set of related files that make up a workload and move them to a remote location. At the new location, the only compatibility required is a server running the correct version of the Linux kernel.

• The layers of a container workload are sequenced so that they launch in the right order upon deployment. Containers also make it predictable to know how connections to the network, database server, and other resources are made, provided the remote host recognizes container formatting.

• Containers are also considered "a non-compulsory step toward DevOps," as one speaker phrased it at DockerCon. The standards and disciplines they impose make it easier to create applications without worrying about the specific environment in which they're going to run. Once in that environment, some elements of their build, testing, staging, and deployment can be done automatically.

• Containers are not a replacement for virtualization. IT managers may choose containers over virtualization, but the virtualized workload, with its complete copy of the operating system, is a more discrete unit. It can be moved around the data center while running and doesn't need a host with exactly the right Linux kernel.

• Containers may be deemed suitable for some applications and virtual machines for others, depending on operational circumstances. Right now, a software-defined data center will be based on virtual machines, but Stevens said Red Hat will continue to work to bring more automation and system management to containers.
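The layered build that the first bullet describes is what a Dockerfile expresses: each instruction adds a layer on top of the previous one, and a rebuild reuses any layer that hasn't changed. A minimal sketch, assuming a simple Python application (the base image tag and file names are illustrative, not taken from the article):

```dockerfile
# Base layer: a stock Ubuntu image pulled from the public registry.
FROM ubuntu:14.04

# Each RUN instruction produces its own layer; if this line is
# unchanged, the cached layer is reused on the next build.
RUN apt-get update && apt-get install -y python

# Copying the application files into the image adds another layer.
ADD app.py /opt/app/app.py

# Metadata layers: the port the app listens on and the default command.
EXPOSE 8000
CMD ["python", "/opt/app/app.py"]
```

Built with `docker build -t myapp .` and started with `docker run -d -p 8000:8000 myapp`, the resulting image runs unchanged on any host with a compatible Linux kernel and the Docker engine installed.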


Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ...

Charlie Babcock (Author), 12/9/2014 7:06:13 PM
Ah, FreeBSD pioneered containers before Sun
To be clear, Sun Microsystems pioneered the concept of containers in Solaris before Docker, and FreeBSD pioneered jails in FreeBSD Unix before Sun.
Charlie Babcock (Author), 6/10/2014 4:45:45 PM
Containers "scoot" like virtual machines?
Lorna, over at VMware they're groaning at the notion that containers can "scoot between" servers the way virtual machines are "live migrated" between physical hosts. It may be possible someday, but for now the systems management of virtual machines is far ahead of containers. Containers can be deactivated, moved over the wire and restarted, but that doesn't sound much like vMotioning a currently running VM. Both, however, can be moved from one destination to another as a set of files. VMs are non-discriminating; practically any host with the right hypervisor will do. Containers need to move between similar Linux kernels. It seems likely to me that containers will suggest new forms of workload movement and data center automation. It's just very early in deciding how to do it.
Lorna Garey (Author), 6/10/2014 4:19:39 PM
That's a great explanation, that a container is tied to an external OS while VMs contain the OS. Are there any other dependencies that containers have? Or, as long as Server A and Server B both have the same version of RHEL, a container can scoot between them regardless of other factors?
Thomas Claburn (Author), 6/10/2014 4:14:33 PM
A way to mitigate lock-in?
Do Docker containers make it easier to avoid lock-in with specific cloud vendors? Or is there something else that limits app portability between providers?