Industry heavyweights line up at inaugural DockerCon user conference to support Docker as the de facto standard for Linux containers.
Docker, the company that sponsors the open source Docker project, is gaining allies in making its commercially supported Linux container format a de facto standard. Linux containers are a way of packaging an application and its related software for movement over a network or the Internet. Once at their destination, containers launch in a standard way, and multiple containers can run under a single host operating system.
Sun originally pioneered the concept with Solaris Containers. The Linux community has broadened the concept through the Docker project, which launched with 15 contributors in March 2013 and is now available in its 1.0 version, with 460 developers as contributors.
"We need Docker's capabilities to power the Web," said John Engates, CTO of Rackspace, Monday at DockerCon14, the first developer conference for Docker. In the future, "a planet-scale cloud will be ubiquitous, and it will be easy to move from one cloud to another" using Docker, he predicted.
"I've never seen a community coalesce so fast," said Docker CEO Ben Golub.
Both Google and IBM will send engineering representatives to the keynote podium Tuesday to describe why Docker is a sound way to move and maintain workloads that contain multiple complex and related parts.
Monday, Boden Russell, an advisory software engineer with IBM's Global Technology Services, revealed benchmarking that showed it is quicker to launch workloads in containers than in virtual machines. Containers require less memory and CPU at launch, according to Russell's statistics. Once running, however, they tend to use about the same amount of CPU and memory for a process like MySQL online transaction processing, he said.
In addition, both Red Hat and Rackspace announced they are backing Docker as their choice for a container system that works with their products. Engates was invited to the podium early Monday to talk about how the Rackspace Cloud will include a pre-installed copy of Docker for customers to use if they choose. "Without that support, the first thing a customer would have to do is install Docker himself. This way, he'll just have to push a few buttons to get Docker running" and start building a workload to be deployed in Rackspace infrastructure, he said in an interview before the start of the conference.
Red Hat was singled out as an early partner by Docker's Golub. "I can't thank Red Hat enough. In many ways they stand alone" as an early believer in the value of Docker as open source code. Red Hat is responsible for 289 code commits in the 1.0 version, he said.
Ubuntu, Debian, and CentOS are also supporting the Docker format, but Red Hat is using Docker as the cornerstone of a project inside OpenStack -- Project Solum -- as a way to build, test, migrate, and deploy workloads to an OpenStack cloud, such as HP's or Rackspace's, or clouds built by its own customers. Red Hat executive VP and CTO Brian Stevens said Red Hat is working on its Atomic version of Red Hat Enterprise Linux, which will be optimized to run a Docker system and Docker containers.
It is also working on tooling, called Cockpit, to make it easier to assemble applications and instrument how well they're running using Docker, Stevens said. It has founded the GearD project, as well, to produce a command-line client that links Docker containers on different hosts and ties them into a single system manager.
"Atomic and Cockpit are built for a world of Docker apps," a development that Red Hat is betting on. GearD "knows how to take this (containerized) code and spin up three services for it," he said.
Another speaker at DockerCon Tuesday will be Eric Brewer, VP of infrastructure at Google, which also uses Docker containers. "Google and Docker are a very natural fit," said Brewer, a kind of uber-engineer at Google. "We both have the same vision of how applications should be built," Brewer told Wired in an interview published Monday.
Monday's DockerCon offered an overview of things that Linux containers are -- and are not:
• A container provides a way to assemble an application composed of different parts in layers. The layers can be moved around as a unit, but any single layer may be manually or automatically updated without disturbing the other layers. Linux containers move applications a step toward being self-maintaining, rather than requiring IT staff to maintain them.
• Containers are a standardized, lighter-weight way to isolate several applications running on the same server, compared with virtual machines. All containers on a host share one operating system; each virtual machine must be equipped with its own.
• Like a shipping container, a Linux container is a way to package a set of related files that make up a workload and move them to a remote location. At the new location, the only compatibility required is a server running the correct version of the Linux kernel.
• The layers of a container workload are sequenced so that they launch in the right order upon deployment. Containers also make connections to the network, database server, and other resources predictable, provided the remote host recognizes the container format.
• Containers are also considered "a non-compulsory step toward DevOps," as one speaker phrased it at DockerCon. The standards and disciplines they impose make it easier to create applications without worrying about the specific environment in which they're going to run. Once in that environment, some elements of their build, testing, staging, and deployment can be done automatically.
• Containers are not a replacement for virtualization. IT managers may choose containers over virtualization, but the virtualized workload, with its complete copy of the operating system, is a more discrete unit. It can be moved around the data center while running and doesn't need a host with exactly the right Linux kernel.
• In some cases, containers may be deemed suitable for certain applications and virtual machines for others, depending on operational circumstances. Right now, a software-defined data center will be based on virtual machines. But Stevens said Red Hat would continue to work to bring more automation and system management to containers.
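The layered packaging described above can be sketched with a minimal Dockerfile. This is an illustrative example, not code from the conference: the base image, `app.py` file, and port are all hypothetical, but each instruction produces one cacheable image layer, which is the mechanism the speakers were describing.

```dockerfile
# Hypothetical example; each instruction creates one image layer.
# Base operating-system layer, shared and cached across builds:
FROM ubuntu:14.04
# Dependency layer; rebuilt only when this line changes:
RUN apt-get update && apt-get install -y python
# Application layer; assumes an app.py in the build context:
COPY app.py /opt/app/app.py
# Declare the port the application listens on:
EXPOSE 8000
# Standard launch command, run the same way on any Docker host:
CMD ["python", "/opt/app/app.py"]
```

Built with `docker build -t myapp .` and started with `docker run -d -p 8000:8000 myapp`, such an image launches the same way on any host running a compatible Linux kernel with Docker installed, and when the application layer changes, only that layer needs to be rebuilt and transferred.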
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...