Docker had its Swarm orchestration product tested against Kubernetes and claims the results show a 5X advantage in container startup speed.

Charles Babcock, Editor at Large, Cloud

March 9, 2016

4 Min Read
(Image: Docker)

Docker submitted its container orchestration software, Swarm, to testing by a third party, which claims to have found it up to five times more efficient than Kubernetes, its chief open source competitor.

Docker is the leading supplier of a containerization formatting engine and system for managing containers on their way to production. It has its own open source community, which is active in developing the code. The company is at pains to illustrate the capabilities of its Swarm orchestration product and selected a popular container blogger, Jeff Nickoloff, principal of All In Geek Consulting Services, to conduct the comparison test. Nickoloff is the author of Docker in Action, a book about best practices for using Docker. He is also a developer with experience at Arizona State University, Limelight Networks, and Amazon.com.

The Kubernetes project is popular among container users because it springs from code donated by the world's leading implementer of container use, Google.

[Want to see what an ex-Google leader of Kubernetes development said at KubeCon last November? Read Kubernetes Yields 'Operations Dividend,' Still Working on Scalability.]

"There tends to be a lot of noise around container performance. We wanted to provide some data," said Docker SVP David Messina. The results were published on the Docker website March 9, which happens to be the eve of the KubeCon gathering in London, March 10-11.

One objective of the test was to scale Docker Swarm and Kubernetes up to 1,000 nodes running 30,000 containers. The test would measure how long it took to schedule or orchestrate the 30,000 containers, and how quickly the first was up and running. A second measure was how soon the whole environment was performing at what might be considered its optimum pace.
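
Nickoloff's actual test harness isn't reproduced here, but a minimal sketch of the kind of measurement the article describes -- launching a large batch of containers and recording how long each takes to be scheduled and report running -- might look like the following. The `client` object and its `create` and `wait_until_running` calls are hypothetical placeholders, not the real Swarm or Kubernetes APIs.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def start_container(client, spec):
    """Time one container from scheduling request to a 'running' report."""
    t0 = time.monotonic()
    container = client.create(spec)       # hypothetical: ask the scheduler to place the container
    client.wait_until_running(container)  # hypothetical: block until the orchestrator reports it running
    return time.monotonic() - t0

def run_batch(client, spec, count=30000, workers=100):
    """Launch `count` containers and summarize per-container start latencies."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(lambda _: start_container(client, spec), range(count)))
    # Report the same 10% / 50% / 100% points the article cites.
    return {p: latencies[int(len(latencies) * p / 100) - 1] for p in (10, 50, 100)}
```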

Docker's Messina said the tests were configured that way because the goal of container orchestration is the same as ordering food in a restaurant. "The goal isn't just to get your order in," he said. "The goal is to get the food on the table."

Nickoloff found that initiating 10% of the containers took Swarm 0.36 seconds and Kubernetes 1.99 seconds. Initiating half of them took Swarm 0.41 seconds and Kubernetes 2.05 seconds. Initiating 100% took Swarm 0.59 seconds and Kubernetes 2.73 seconds.

Messina did the math and boasted that the results mean Swarm activated the containers five times faster than Kubernetes, on average. Swarm was actually 5.5X faster for the first 10%. The gap narrowed at the 100% initiation point, with Swarm clocking in at 4.6X faster than Kubernetes.
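
The arithmetic behind those multiples follows directly from the per-container times above; dividing the Kubernetes figure by the Swarm figure at each reported point gives the ratios Messina cited:

```python
# Per-container start times (seconds) at the 10%, 50%, and 100% points, as reported in the test.
swarm = {"10%": 0.36, "50%": 0.41, "100%": 0.59}
kubernetes = {"10%": 1.99, "50%": 2.05, "100%": 2.73}

for point in swarm:
    print(f"{point}: Kubernetes took {kubernetes[point] / swarm[point]:.1f}x as long as Swarm")
# 10%: 5.5x, 50%: 5.0x, 100%: 4.6x -- roughly 5x on average
```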

Messina focused on the ability of the orchestration system to get a large number of containers functioning efficiently in Nickoloff's tests. For Swarm, that process required an elapsed time of 18.25 seconds. For Kubernetes, Messina reported the period was 126.93 seconds. That was a measure of the time it took to interrogate each container in the environment and get a response that it was running.
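
The article doesn't spell out the mechanism for that measurement, but the description suggests polling every container in the environment until all of them report a running status, and timing the whole loop. A rough sketch, again using a hypothetical `client.status` call rather than the real Swarm or Kubernetes APIs, might be:

```python
import time

def time_until_all_running(client, containers, poll_interval=0.5):
    """Return elapsed seconds until every container reports a 'running' state."""
    start = time.monotonic()
    pending = set(containers)
    while pending:
        # Re-check only the containers that have not yet reported running.
        pending = {c for c in pending if client.status(c) != "running"}  # hypothetical status call
        if pending:
            time.sleep(poll_interval)
    return time.monotonic() - start
```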

Under both systems, getting containers up and running occurs so quickly, and for so many containers, that many operations managers would likely find either product acceptable in today's early phase of container implementation. As containers proliferate, and production systems come to rely on their so-called "burstability" -- the ability to burst into operation, quickly expanding an application's compute power -- performance statistics will become more and more relevant.

Vendor-produced benchmarks always have to be taken with a grain of salt. They are not measures of how the customer's applications will run in containers. They are an abstraction, partially constructed to illustrate the strengths of the vendor's product, and possibly the weaknesses of a competitor's. Docker selected a credible third party to do these tests, but the reader must remember that the tests were paid for by Docker.

Nevertheless, the Nickoloff benchmark supplies a one-mile marker that customers may use as they mix and match their assembly of container products. Docker is trying to be the one-stop shop through which customers may meet all their future container needs. Kubernetes is trying to capture and further develop Google's container orchestration expertise and apply it to general-purpose computing settings.

About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
