Developments in the last two weeks have made it clear that containers are not just a great way for developers to package up code and move it around. They're becoming secure, reliable vehicles with which enterprise IT operations and cloud users can move their code around as well.
As their code-handling strengths grow, containers are no flash in the pan. They're here to stay and will enjoy an expanding role not just in test and dev but also in IT operations, although there's still the question of whether they need a virtual machine wrapper.
One of the main obstacles to taking advantage of containers' efficiency in isolating workloads has been the worry about their security. A container needs root privilege on a host server for the Docker daemon to be able to start and stop it. But root privilege gives the container (and whoever owns it) access to all the host server's resources, and that's one place where opportunity for mischief lies.
Another security concern is how containers request Linux operating system services through Linux's Syscall interface, composed of about 130 commands. On rare occasions, particular application code configurations have produced an unexpected twist, allowing a command to overstep the container's intended isolation when requesting host services.
That large number of commands also leaves a large attack surface where one of the commands might be tampered with by an intruder. "The problem is that the Syscall interface is so broad," Craig McLuckie, senior product lead for Google Compute Engine, said in an interview earlier this year.
By putting the container in a virtual machine wrapper, the application ends up communicating with the operating system and hardware through a smaller gateway. A KVM hypervisor, or a slenderized hypervisor of any brand known as a microvisor, can reduce the number of commands it needs to work with to about 30. Furthermore, continuous monitoring of that limited number is a well-established art and helps protect against tampering.
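The narrow-gateway idea can be pictured as a whitelist: anything outside a small, approved set of calls is refused before it reaches the host. The sketch below is purely illustrative -- the call names and the tiny allowed set are invented for the example, and real microvisors enforce this at the hypervisor level, not in application code.

```python
# Conceptual sketch of a reduced command gateway, the way a microvisor
# exposes roughly 30 calls instead of the full Syscall interface.
# ALLOWED_CALLS is an invented, tiny subset for illustration only.

ALLOWED_CALLS = {"read", "write", "open", "close", "mmap"}

def gateway(call_name):
    """Refuse any request that falls outside the reduced interface."""
    if call_name not in ALLOWED_CALLS:
        raise PermissionError(f"call '{call_name}' blocked by gateway")
    return f"forwarding '{call_name}' to host"

print(gateway("read"))       # inside the reduced set: forwarded
try:
    gateway("ptrace")        # outside the reduced set: refused
except PermissionError as e:
    print(e)
```

A smaller interface also means a smaller surface to monitor: auditing 30 entry points for tampering is far more tractable than auditing well over a hundred.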
VMware said November 16 that its Photon Controller, a software governor for supplying services to distributed Docker containers, will get a microvisor in 2016. The microvisor will be able to apply a protective wrapper to a container without interfering with some of its most favorable operational characteristics. While granting protection to the container, the microvisor will only require about 20MB of server memory and will allow the container to spin up in a fraction of a second -- slightly slower than unwrapped containers, but much faster than a virtual machine.
Whether this approach will catch on remains to be seen, but it has tackled the problem of container security by addressing and sharply reducing the issue of virtual machine overhead. The microvisor will be based on VMware's ESX hypervisor.
Intel first demonstrated the ability to put a virtual machine wrapper on a container with its Clear Container initiative unveiled at the OpenStack Summit in Vancouver in May. It did so using a microvisor based on the open source KVM hypervisor.
Emphasis On Security
With or without the virtual machine wrapper, containers are becoming a more secure way to move code across the data center or across the country. Docker addressed the root privilege issue in the 1.9 experimental release of its system, demonstrated at DockerCon this week in Barcelona. Root access is still provided to the Docker daemon starting up containers in the background, but Linux namespaces are used to separate the daemon from individual containers and their users.
IT operations staff will still have root privilege access to a Docker host, which they need in order to administer it, but those privileges can be made specific to an administrator or departmental organization. Instead of everyone who comes to the host having root access to everything, only selected and named parties will have root access when the feature becomes part of the mainstream.
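The namespace separation described above rests on a simple remapping: UID 0 inside the container corresponds to an unprivileged UID on the host, so "root in the container" carries no special power outside it. The arithmetic below is a sketch of that mapping; the base offset and range are made-up example values, not Docker's actual defaults.

```python
# Illustrative sketch of the UID remapping behind Linux user namespaces.
# SUBUID_BASE and SUBUID_RANGE are hypothetical example values.

SUBUID_BASE = 100000   # start of the host UID range granted to this container
SUBUID_RANGE = 65536   # number of UIDs in that range

def container_to_host_uid(container_uid):
    """Translate a UID seen inside the container to its host UID."""
    if not 0 <= container_uid < SUBUID_RANGE:
        raise ValueError("UID outside the namespace's mapped range")
    return SUBUID_BASE + container_uid

print(container_to_host_uid(0))     # container "root" -> unprivileged 100000 on the host
print(container_to_host_uid(1000))  # ordinary container user -> 101000
```

Because the host UID is unprivileged, a process that escapes the container finds itself holding no meaningful authority on the host.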
Security isn't just associated with containers running on a host. The code that is built for them needs to be verified as coming from the intended party. For that purpose, Docker made a key fob, the YubiKey 4 device, available to confirm that code allegedly coming from a named developer is accompanied by his or her digital signature. The device plugs into the USB port of a laptop or workstation and requires the developer's physical touch to authorize a signature.
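The principle behind signed code is that any change to the content invalidates the signature. The sketch below illustrates that property with a keyed hash from Python's standard library; it is a stand-in only -- Docker's actual content-trust mechanism uses public-key signatures, and the key and image bytes here are invented for the example.

```python
# Minimal sketch of signature checking on image content. HMAC stands in
# for a real public-key signature so the example is self-contained.
import hashlib
import hmac

DEVELOPER_KEY = b"example-shared-secret"   # hypothetical; real signing uses key pairs

def sign(image_bytes):
    """Produce a signature over the image content."""
    return hmac.new(DEVELOPER_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, signature):
    """Check that the content still matches the developer's signature."""
    return hmac.compare_digest(sign(image_bytes), signature)

image = b"example image content"
sig = sign(image)
print(verify(image, sig))                 # True: content matches the signature
print(verify(image + b"tampered", sig))   # False: any modification is detected
```

The same check works anywhere the image travels, which is what makes a signature more useful than trusting the repository it came from.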
Both Docker and CoreOS have implemented another security feature: the inspection of container images in their repositories and a check of what is there against a listing of open source code modules known to contain vulnerabilities. The scanning isn't looking for malware or evidence of intrusion. It's just running a quick check on whether the code contains potential vulnerabilities or exposures. Human administrators previously had to perform the checks and know when a new vulnerability had been announced in a particular code module. With the rapidly expanding libraries of open source code used in next-generation applications, that amount of change was difficult for any single human to keep up with.
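At its core, this kind of scan is a lookup: the modules packaged in an image are checked against a feed of modules with known vulnerabilities. The sketch below shows that shape; the package names, versions, and advisory entry are invented for illustration and do not reflect Docker's or CoreOS's actual scanners.

```python
# Sketch of repository image scanning: match packaged modules against a
# known-vulnerable list. All names, versions, and advisories are invented.

KNOWN_VULNERABLE = {
    ("examplelib", "1.0.1"): "EXAMPLE-ADVISORY-001",
}

def scan(image_manifest):
    """Return (name, version, advisory) for each flagged module."""
    findings = []
    for name, version in image_manifest:
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

manifest = [("examplelib", "1.0.1"), ("otherlib", "2.3.0")]
for name, version, advisory in scan(manifest):
    print(f"{name} {version}: {advisory}")
```

The value of automating this is in the feed, not the lookup: when a new advisory lands, every stored image can be rechecked immediately, which no human administrator could do across a large repository.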
Container management keeps getting more sophisticated -- and sometimes simpler. For example, Rackspace launched its Carina service on Oct. 27. At the direction of a customer, Carina pulls a Docker image from a repository, such as the Docker Hub, spins up a container cluster on which to run it, and launches the container. Rackspace monitors and manages the cluster to keep it running. This is a kind of hybrid cloud service/managed service that simplifies several steps for a Docker user.
Google's open source project Kubernetes is also making strides. Brendan Burns, Kubernetes project lead, announced Kubernetes 1.1's release November 10, and illustrated how a Kubernetes cluster can maintain its ability to handle a million queries a second while rolling out a live update across its servers. Kubernetes keeps scaling to larger dimensions and getting more management features as the project matures.
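The live update Burns demonstrated works by replacing replicas incrementally rather than all at once, so the cluster keeps serving throughout. The loop below is a conceptual sketch of that rolling pattern, with invented version labels and replica counts; it is not how Kubernetes itself is implemented.

```python
# Conceptual sketch of a rolling update: replace replicas one at a time
# so serving capacity never drops by more than one replica.

def rolling_update(fleet, new_version):
    """Upgrade each replica in place, one at a time."""
    for i, old in enumerate(fleet):
        fleet[i] = new_version   # old replica drained, replacement brought up
        done = sum(v == new_version for v in fleet)
        print(f"replica {i}: {old} -> {new_version} ({done}/{len(fleet)} updated)")
    return fleet

print(rolling_update(["v1.0", "v1.0", "v1.0"], "v1.1"))
```

Keeping most replicas live at every step is what lets a cluster absorb an update while still handling its full query load.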
Red Hat said that its OpenShift 3.1 platform-as-a-service, announced Nov. 9 and available at the end of the month, will include Atomic Host Platform for launching and managing container lifecycles. Red Hat joins IBM, Google, and Docker in building out their respective container management platforms to handle the details of container startup, scaling, maintenance, and tear-down.
HP Enterprise entered the burgeoning container field November 16 by announcing its online Helion Development Platform 2.0 with support for Docker. The move brings one of the last major technology providers into alignment with Docker's broad acceptance.
In addition to the security features mentioned above, Docker announced a preview release of its Universal Control Plane on November 17. After Docker containers have been built, Universal Control Plane will aid their deployment either on-premises or to the cloud service of choice.
In several areas, what Red Hat is doing overlaps with what Kubernetes and Docker are doing to make containers more deployable and manageable. It's an area where product boundary lines are unclear and rapidly shifting as feature sets expand. But that fact alone testifies to what is now an enormous interest in containers, their possible marriage with virtual machines, and tooling that will make dealing with hundreds or thousands of containers at a time a less daunting prospect.