The Docker container platform can now supply software-defined networking (SDN) to an application that gets deployed as multiple containers on multiple hosts. The SDN capability ensures that the distributed containers can communicate with each other and remain connected, even if some are moved.
In addition, the Docker platform has enhanced its three core orchestration tools, Docker Compose, Machine, and Swarm, adding intelligence and features to the ways they deal with multi-container applications.
The announcement of these moves by Docker Inc., the company that founded the Docker project and has now contributed its code to the Open Container Project as part of the Linux Foundation, came on the opening day of DockerCon, the annual user conference for Docker customers and developers.
Docker started out as a convenient packaging system for new code, tapping into features common to Linux distributions that share the same kernel. Because of that shared kernel, Docker containers came to occupy the niche between development and operations, providing a convenient way for code to be handed off from developers to test and quality assurance, and on to production. By bundling the parts of Linux the application needed from outside the kernel, the package was easy to pick up and run in a variety of environments.
[Want to learn more about how Docker and Rocket supplier CoreOS reached agreement on a common specification? See Docker, CoreOS Bury The Hatchet For Container Spec.]
On the opening day of DockerCon 2015, however, it became clear that Docker Inc. was shooting for considerably more than a packaging system for its popular container-formatting system. CEO Ben Golub, in an opening keynote, said Docker has started "to change thinking about the data center." Through containers, parts of the data center can be distributed out into business units, where a server can serve as both "compute power and radiators" in climates where the heat dissipation is a plus.
But mainly, he said, Docker is expanding its reach with features that make it an easier unit to manage in the data center, especially when an application is composed of multiple containers.
Solomon Hykes, who founded the Docker project in his mother's basement in Paris while running DotCloud in 2011, said Docker's ability to assign logical units of the network to containers -- its SDN overlay of existing IP networks -- amounted to a big step forward for container operations. Docker container networking has previously been a sticking point. If the container host failed, or the Docker daemon running in the background stalled, the containers disappeared and could be restored only with new IP addresses. That restoration left them invisible to the systems to which they had previously been connected.
"Earlier this year we went looking for help," he said. Docker announced it would acquire SDN startup SocketPlane in March, "and three months later ... we've reinvented networking for Docker. We've built in some features we think you're going to love," Hykes told DockerCon attendees Monday morning in San Francisco.
SocketPlane's approach creates a virtual network by using Open vSwitch, a software switch that can be embedded in a virtual machine or container, to build Virtual Extensible LAN (VXLAN) tunnels. VXLAN, originally developed by VMware, Arista, and Cisco, has since been published by the IETF as RFC 7348. The tunnels are logical units of existing IP networks and can be used to connect containers. Each container gets an IP address that stays with it regardless of where it migrates.
Docker's networking enhancement also relies on the Domain Name System (DNS), another open standard, for name resolution inside the data center. With DNS and VXLAN, Docker has been able to supply "multi-host networking out of the box" for Docker containers and to allow complex microservice applications to be deployed as a set of containers.
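In practice, the workflow Docker described looks roughly like the following command sketch (as the feature later shipped in the Docker engine; it assumes a multi-host setup with an overlay-capable engine and a key-value store configured, and the network and container names are illustrative):

```shell
# Create an overlay (VXLAN-backed) network visible across the cluster
docker network create -d overlay appnet

# Start containers on different hosts, attached to the same overlay network
docker run -d --net=appnet --name db postgres
docker run -d --net=appnet --name web nginx

# "web" can reach "db" by name: the engine's DNS-based discovery resolves
# container names to their overlay IP addresses, which stay with the
# container wherever it runs
docker exec web ping -c 1 db
```

The point of the sketch is that the application containers need no special configuration: they address each other by name, and the overlay plus DNS supplies the plumbing.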
"Assemble virtual networks on any topology. Any network available on one machine is available to any other machine," Hykes told the general session. The networking works on existing networks, without rewriting applications or adding new networks specifically for containers. The VXLAN-based approach allows microservices to be placed on any node of a Docker Swarm, Docker's container-cluster-building software, and stay in touch with the other microservices that make up the same application.
Containers may be a new part of data center operations, but the multi-host networking will make them easier to network and manage throughout their lifecycle. Developers can build an experimental network and let their distributed application run with it. At a later stage, the network operations team can apply policies that add availability and security. The application itself doesn't need to be fiddled with to give it proper networking through the different stages of its lifecycle, Hykes said.
The application can also move out of the enterprise data center and into the public cloud, carrying its networking characteristics with it, Docker said in the announcement.
In other Docker developments, Docker Machine, Compose, and Swarm have been integrated with the new networking capability. Compose is used to define the containers in a distributed application and how they're connected. Swarm has been integrated with the Mesos cluster workload scheduling system: Developers can begin with a small Swarm cluster, and operations can later plug the application into a Mesos cluster with hundreds of nodes managed by a single scheduler.
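A Compose definition of the kind described here is a short YAML file. The following is a hypothetical two-service example in the Compose syntax of the era (service names and images are illustrative, not from the announcement):

```yaml
# docker-compose.yml -- a web front end linked to a database
web:
  image: nginx
  links:
    - db          # declares that "web" connects to "db"
  ports:
    - "80:80"
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: example
```

Running `docker-compose up` against such a file starts both containers and wires them together; with the new networking, the same definition can span multiple Swarm hosts.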
In a similar vein, Docker can work with Amazon Web Services' EC2 Container Service. A multi-node Dockerized application built with Compose and Swarm can be deployed to the AWS system's cluster management.
In yet another development, Hykes described a plug-in architecture for the Docker platform that lets developers tie in their own tooling through four connection points, with more to come. The initial plug-in points cover storage volumes and networking. Third parties, including Cisco, ClusterHQ, Microsoft, Midokura, Nuage Networks, Project Calico, VMware, and Weave, are taking advantage of the plug-in points to tie their own systems into the Docker platform.
The plug-in architecture means the platform's new SDN networking capability can be used or swapped out in favor of a third-party's SDN networking, Hykes explained. Developers with their own tooling will be able to plug in, and hundreds of Docker technology partners will have the opportunity to plug in their tooling as well, he said.
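The swap Hykes described surfaces as a driver choice at network-creation time. A hedged sketch (the `weave` driver name is illustrative and assumes the corresponding third-party plug-in is installed on the host):

```shell
# Docker's built-in SDN overlay driver
docker network create -d overlay appnet

# The same command with a third-party network driver plugged in;
# containers attached to either network use it transparently
docker network create -d weave partnernet
```

Because the driver is selected per network rather than baked into the application, operations can change SDN vendors without touching the containers themselves.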
The announcements moved Docker deeper into the data center, giving operations staff more reason to accept applications sent their way in the form of containers. Docker, in effect, is no longer building a container system for Linux (and eventually Windows) applications. It's building a new DevOps system, tying developers more closely to their operational counterparts and resolving some issues that have persistently plagued new code deployments.
Golub said in his remarks that, with developers continuing to flock to the Docker standard and 40,000 projects using Docker listed on GitHub, Docker is no longer a packaging system. "[I]t's a movement ... Remember, this is only the beginning."

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...