Editor's note: This commentary, which focuses on Docker Linux containers, was written before CoreOS introduced a Rocket container runtime. Sebastian Stadil will follow up with an assessment of container options now that CoreOS has broken away from the Docker project and offered its Rocket alternative.
Docker is so hot it recently received $40 million in funding that it won't even start spending until late next year at the earliest. It has been positioned by proponents as ending the need for virtualization and is widely regarded as something that's going to disrupt the entire IT industry. Clearly, the rhetoric is overblown.
Further confusing the issue of the true value of Docker technology are the 35,000 apps built on top of Docker in its app store, some of which might or might not disrupt individual areas of the IT industry. Yet rolling it all into a blanket "Docker will eat the world" message doesn't serve anyone's understanding. This is regrettable because it means Docker's actual and tremendous value is being overlooked.
The tradeoffs of containerization
Docker's value is its ability to facilitate repeatable application deployment and execution, which it does very well. It uses Linux containerization, a form of lightweight virtualization that is an alternative to traditional hypervisors. The key difference is that when you use containers, the host and the guests share a kernel, whereas with a traditional hypervisor, the host and the guests don't share anything but the hardware.
[Want more on Docker? See Are Docker Containers Essential To PaaS?]
Containers deliver better performance than regular hypervisors because the host and guests share a single kernel; compared to bare metal, the overhead of containerization is negligible. But those performance gains come with tradeoffs. Docker containers require that the host and guest operating systems use the same kernel (e.g., you can only run Linux on Linux). And containers offer weaker isolation than hypervisors, which means that securing them tends to be more difficult.
Though containers' performance advantage over virtualization is often cited to justify the hyperbole about Docker's superiority, that overstates the issue. Containerization has existed for a while, and the tradeoffs really haven't changed at all.
The real revolution
The real revolution in Docker is that it is a new and superior way of packaging applications -- a way that's easy to use, prescriptive, and fast. Docker achieves this with a plaintext file called a Dockerfile, which contains declarations such as "use this other Docker image as a base," "add this, add that," and "run this." Using the docker build command, users convert Dockerfiles into images they can then deploy to any host that supports Docker. The benefit: It works the same everywhere, using a simple "docker run" command.
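To make the declarations above concrete, here is a minimal sketch of what such a Dockerfile might look like (the base image, file paths, and commands are hypothetical examples, not drawn from any particular application):

```dockerfile
# Use another Docker image as a base (hypothetical example).
FROM ubuntu:14.04

# Add the application's files into the image.
COPY . /app

# Run a command at build time to install dependencies.
RUN apt-get update && apt-get install -y python

# Command executed when the container starts.
CMD ["python", "/app/main.py"]
```

Converting this file into an image and running it is then a matter of `docker build -t myapp .` followed by `docker run myapp` -- and the result behaves the same on any host that supports Docker.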
This is a big deal. Even when using something that is arguably cross-platform (such as Java), you can still run into dependency management issues that will sideline an entire deployment. Using Docker, this cannot happen, by construction. And though there are other application-deployment solutions, none offer the combination of ease, speed, and prescriptiveness that Docker does. However, one has to balance all this with the reality that using Docker as a packaging tool is just another way to deploy an app.
Docker: The new IT gold standard?
In the extreme, proponents will have you believe Docker is a competitive threat to the entire IT ecosystem, starting with virtualization and the cloud. They believe so because they see a future in which every application is "containerized" and deployed to a massive pool of bare-metal resources through a "container scheduler" (early incarnations of those exist in the form of Google's Kubernetes or Apache Mesos) -- thus bypassing both hypervisors and cloud platforms.
Yet this future is still very far out, because substantial technical hurdles remain. For starters, the weaker isolation guarantees offered by containerization (compared to virtualization) remain a security concern. What's more, containerization remains a debatable fit for enterprise environments where the organization can hardly settle on a single kernel due to the existence of substantial Windows workloads. Finally, stateful applications, such as databases, aren't trivial to deploy across a pool of resources. Migrating a containerized workload from one host to another is easy; migrating heaps of data, not so much.
Though Docker is a new and superior way of packaging applications, it's important to remember that it works in conjunction with many other technologies in the IT ecosystem. For example, it lets users package an application, but that doesn't mean it eliminates the need for configuration management tools.
Docker lets users run an application, but that doesn't mean it eliminates the need for all virtualization, orchestration, and configuration management. It has real value as a packaging tool and is part of an evolving ecosystem. Will it disrupt the entire IT industry? I don't think so. Yet its open, collaborative approach and broader community have the potential to shape a few twists and turns along the way.
Sebastian Stadil is the founder and CEO of Scalr, supplier of a front-end cloud management platform available either as a service or for on-premises installation. He is also the founder of the Silicon Valley Cloud Computing Group, which has more than 8,000 members.