February 26, 2015
5 Min Read
Docker would like to ride data center containers to fame and fortune in much the same way VMware rode the wave of data center virtualization. The new orchestration tools Docker released Thursday for its Linux container platform show how it plans to reach that success.
Docker is releasing two new orchestration tools, Docker Machine and Docker Swarm, along with the 1.1 release of Docker Compose. Together, the three tools take Docker beyond generating containers with ready-to-go workloads: they make it easier for IT pros to move those workloads, so they can be deployed in a data center environment other than the one in which they were built. Orchestration tools get a workload or distributed application ready for its production environment.
In one sense, Docker is tracking VMware's playbook: first stimulate demand for a new type of workload, then provide the means to provision, orchestrate, and manage it. The three tools are available as free downloads, each from its own page announced today on the Docker project website, DockerProject.com.
Here's a rundown of the three tools, which address different elements of orchestration.
Docker Machine, still in beta, automates what were two painstakingly manual steps -- configuring a workload in a container and preparing a target environment to host it. It executes those actions in a series of steps triggered by a single command from a system administrator. If the app or workload will run on Amazon Web Services' EC2 infrastructure, Docker Machine readies an Amazon Machine Image on EC2, then installs the Docker Engine in the resulting virtual machine so the VM can run any containerized application sent to it.
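As a rough sketch of that single-command flow, using the 2015-era docker-machine beta CLI -- the machine names are placeholders, and the cloud credentials are assumptions a real run would supply:

```shell
# Provision a Docker-ready VM locally with the VirtualBox driver
# ("dev" is a placeholder machine name):
docker-machine create --driver virtualbox dev

# Point the regular Docker client at the new host, then run a workload there:
eval "$(docker-machine env dev)"
docker run -d nginx

# The same pattern targets a cloud driver; the key and secret are placeholders:
docker-machine create --driver amazonec2 \
    --amazonec2-access-key <key> --amazonec2-secret-key <secret> \
    staging
```

The point of the tool is that the second `create` differs from the first only in its driver flags; the container-side workflow is unchanged.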
Docker Machine comes with drivers for 12 environments, including Amazon. It can prepare a container host on IBM SoftLayer; Microsoft Azure or a Microsoft Hyper-V environment; Google Compute Engine; an OpenStack cloud, such as HP's or Rackspace's; DigitalOcean; VMware vCloud Air, a standard VMware vSphere virtualized environment, or VMware Fusion for virtualizing Apple Macs; and Oracle's VirtualBox virtual machine environment.
For each setting, "the integration between the container and the target environment is already done. You don't have to relearn things for each environment to get a Docker container up and running," said Docker's David Messina, VP of marketing, in an interview.
Machine knows what type of virtual machine runs in the target environment and how to generate one as a host for a Docker container. It knows what Linux kernel is needed by the application and directs the provisioning of the correct system on the host. It can also work with the Joyent cloud's SmartOS open source version of Solaris.
Swarm, which is also in beta, builds the virtual server cluster needed to host a workload that may be divided up between several containers. In concept, Swarm overlaps with the goals of Kubernetes, a Google-sponsored project aimed at establishing a standard way to build a cluster hosting a multi-container application. But where Kubernetes would handle clustering details for a given cloud, Swarm is attempting to be a more general-purpose tool that can provide a cluster either in-house or in a number of different cloud infrastructures.
Docker is keeping in mind its core constituency of developers, who want to use containers as they develop a large, distributed application with many parts. After producing an app, developers can then use Swarm to help move an app out into a new setting, which could be an on-premises data center or a public cloud service.
Swarm makes use of tools like Apache ZooKeeper, Consul, and etcd. ZooKeeper is a central service that tracks the various parts of a distributed application; it can manage the synchronization of the parts and knows how each is configured. Consul is an open source centralized service registry, and etcd is a distributed key-value store for shared configuration data. Because etcd replicates its data across servers, a single server can fail out from underneath it without data being lost or the distributed application disrupted.
Swarm uses these and other tools to know how the servers it is considering for use are configured and what their resources are. This process helps it decide how to deploy containers on a cluster. "Swarm has an understanding of the host resources tied to it," said Messina.
Swarm works with the Apache Software Foundation's clustering software, Mesos. Integrations are planned for the Amazon EC2 Container Service, IBM Bluemix Container Service, Joyent Smart Data Center, and Microsoft Azure. It will also integrate with Kubernetes itself as it starts to serve as an enterprise or cloud system. Swarm "is very complementary to Kubernetes," said Messina, although in some cases the two may vie to do the same work.
Through its API, Swarm can communicate with the scheduler on a target cluster and start or stop workloads to match the distributed application's needs.
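Under the beta's conventions, standing up a Swarm cluster looked roughly like the following sketch. The token-based hosted discovery shown here was one option alongside the ZooKeeper, Consul, and etcd backends mentioned above, and the IP addresses are placeholders:

```shell
# Create a cluster ID using Swarm's hosted discovery service:
TOKEN=$(docker run --rm swarm create)

# On each host that should join the cluster (address is a placeholder):
docker run -d swarm join --addr=<node-ip>:2375 token://$TOKEN

# Start the Swarm manager, then drive the whole cluster
# through the ordinary Docker client, pointed at the manager:
docker run -d -p 2376:2375 swarm manage token://$TOKEN
docker -H tcp://<manager-ip>:2376 run -d nginx
```

The last line is the payoff: the standard Docker client talks to the Swarm manager as if it were a single engine, and the manager schedules the container onto whichever node's resources fit.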
Both Machine and Swarm will come out of beta in April and go into a product preview phase for several months before becoming generally available later this year.
The third container orchestration tool, Compose, is on release 1.1. Compose uses a YAML file that a developer builds to provide metadata on a given application. (YAML originally stood for Yet Another Markup Language; it is now a recursive acronym for YAML Ain't Markup Language.) With it, a developer can use naming conventions and simple declarative statements to say which containers make up an application and the links they share. The file gives a logical definition of a multi-part application that can be used to get the app up and running in a development environment, in test and quality assurance, and in production. Through Compose, a new release of a service in one container may be updated as the application runs, with no effect on the other parts, Messina said.
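As an illustration of that declarative style -- the service names and images here are invented for the example, not taken from the article -- a minimal docker-compose.yml in the era's version-1 format might link a web container to a Redis container:

```yaml
# Two-container app: "web" is built from the local Dockerfile and
# linked to "redis"; Compose wires the connection up by service name.
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
```

Running `docker-compose up` against this file starts both containers and their link in one step, which is the naming-convention-plus-declaration approach the article describes.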
About the Author(s)
Editor at Large, Cloud
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.