How Kubernetes Came To Lead The Container Management Pack
Kubernetes advocate points to software's roots inside Google, its self-healing features and its container cluster know-how.
January 24, 2017
6 Min Read
Several authorities have argued recently that software packaged in containers will be a consequential technology, and that any company planning to be part of the digital economy can't afford to ignore it.
That would indicate containers are about to proliferate, but what about the systems for managing them? Virtual machines would have been virtually useless if they had resulted in the uncontrolled "virtual sprawl" some observers warned against. In fact, VMware, Veeam and Microsoft made sure that tools to manage virtual machines followed close on the heels of their spread and adoption.
So what's the case with containers? Where are the tools to manage them? Docker, Mesosphere, Red Hat, Cloud Foundry, and Cloudify each have their own answer to that question, and each has won some measure of recognition in the marketplace. But a clear marketplace leader may have emerged, and its characteristics and the reasons for its emergence offer insight into where container management is headed.
Kubernetes is that market leader, according to one of the few measures of container management software: a survey by the OpenStack Foundation on which container cluster manager and container deployment system OpenStack users are using. Kubernetes leads that poll, used by 47% of respondents; the runner-up is Red Hat's OpenShift at 25%, followed by Cloud Foundry at 22%, Mesos at 20%, Docker Swarm at 12%, and Cloudify at 8%.
Perhaps more significant is how many of these users are running their system of choice in production; the other forms of use would be development or an early proof-of-concept project. Again, Kubernetes leads the way in production at 31%, with Cloud Foundry moving into the number-two position at 17% and both Mesos and OpenShift at 13%. Docker Swarm commands a measly 4% in production, while Cloudify comes in at 1%.
Kubernetes is only two years old as a project. It is complex, and its documentation is sometimes sketchy or lacking. InformationWeek asked a key Kubernetes contributor and occasional Docker critic, Alex Polvi, CEO of CoreOS, why a Kubernetes rival, Docker Swarm, wasn't faring better in the management field when Docker owns the leading container formatting engine. Polvi has a direct interest in Kubernetes' success: from the start of the Kubernetes project, his company has been heavily invested in it and sells one of the first commercial products based on it, Tectonic, a container cluster manager. His answer has to be taken in that light.
Polvi answered the question this way: "Docker is the default for container building and packaging. It's never been the leader for managing the container cluster. It seems unlikely that Docker will emerge as the expert on both."
Kubernetes leads among users who have put containers into production partly because it comes out of Google, the company with a decade of experience running containers in production. Kubernetes isn't Google's first container management system. Google started with its Borg cluster manager, followed by a system it called Omega, Polvi continued. Kubernetes is its effort to revamp Omega into a more flexible, general-purpose software layer for container management.
That lineage is the platform layer that has stuck at Google, the one that behind the scenes manages Google Search, Gmail and YouTube. By making the Kubernetes code open source in July 2015, Google tapped into the thinking and development skills of container users of many different stripes and gave the software a shot at becoming a cloud- and enterprise-based system. "Google had an excellent engine going into the project," Polvi noted.
"How do you manage applications at scale?" Polvi asks. "Google has been doing it for 10 years. It has the largest data center footprint of anyone."
There's a related point. By virtue of its early credibility and widespread use, Kubernetes code has been tested in the real world. "Its architecture has become stable and mature. It will add features fast but it will also get boring, like Linux... Like the Linux kernel, it doesn't create problems for your organization," Polvi said.
Among other things, Kubernetes can launch 7,000 containers per second, according to David Rensin, author of the O'Reilly book Kubernetes: Scheduling the Future at Cloud Scale. In doing so, it knows which node in the cluster to assign them to; it uses the concept of pods to place containers that share resources close together on the cluster. It's not simply a workload distributor. It also knows how to schedule when jobs run and how to orchestrate their efficient operation, and it swiftly replicates a container when more copies of it are needed to get a job done.
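The replication behavior described above is typically expressed declaratively. Here is a minimal sketch of a Kubernetes Deployment manifest, using current API conventions; the name, labels and image are hypothetical, chosen only for illustration:

```yaml
# Illustrative Deployment: asks Kubernetes to keep three identical
# pods running, and to schedule replacements onto healthy nodes
# whenever a pod or its node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical application name
spec:
  replicas: 3                 # Kubernetes maintains this count at all times
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.21     # illustrative container image
        ports:
        - containerPort: 80
```

Raising `replicas` is all it takes to get more copies of the same container; the scheduler decides which nodes receive the new pods.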
More recently, the Kubernetes control plane has added an API layer where its internal operational APIs can be exposed to and called by an application developer. When an application launch needs to be scheduled on the container cluster, the developer can build in a call to the Kubernetes API; a developer who wants a different scheduler can issue a call to that scheduler's API instead. This allows many additional systems to be built on top of the Kubernetes platform.
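In current Kubernetes versions, a pod opts out of the default scheduler by naming an alternative in its spec via the `schedulerName` field. A minimal sketch, in which the pod name and the custom scheduler are hypothetical:

```yaml
# Illustrative Pod that asks to be placed by a custom scheduler
# instead of Kubernetes' built-in default scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: batch-job-pod                 # hypothetical pod name
spec:
  schedulerName: my-custom-scheduler  # hypothetical scheduler; omit this line to use the default
  containers:
  - name: worker
    image: busybox                    # illustrative image
    command: ["sleep", "3600"]
```

The custom scheduler, running as its own component, watches for pods carrying its name and binds them to nodes through the same Kubernetes API the default scheduler uses.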
Polvi also made the point that Kubernetes, with the Google experience behind it, is self-driving software. It restarts itself in the event of a stoppage. It can update itself while running. It can place itself on a server cluster, replicate itself and scale itself. It does many of these same things for the containers under its supervision.
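The self-healing Polvi describes extends down to the containers under Kubernetes' supervision: a liveness probe tells the kubelet when to restart a container that has stopped responding. A minimal sketch, in which the pod name, health endpoint and timings are illustrative assumptions:

```yaml
# Illustrative Pod with a liveness probe: the kubelet polls the
# /healthz endpoint and restarts the container after repeated failures.
apiVersion: v1
kind: Pod
metadata:
  name: healed-app           # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:1.21        # illustrative image
    livenessProbe:
      httpGet:
        path: /healthz       # hypothetical health-check endpoint
        port: 80
      initialDelaySeconds: 5 # grace period before the first probe
      periodSeconds: 10      # probe interval
```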
The analogy that Kubernetes is like the Linux kernel may be a bit of a stretch. Kubernetes started inside Google in 2014 and came out as open source code in July 2015. The project doesn't yet have the track record of consistent and steady management and reliable releases that Linux slowly gained over many years.
But it's like Linux in the size of the project and its variety of contributors and the vigor of the debate that goes over adding something like another API. There's a strong sense of keeping it true to its central purpose – container orchestration and management – and not getting bogged down in features and additions that benefit a limited number of container users.
"Kubernetes enables a new class of applications that weren't possible before," something like the way the iPhone enabled the online ride services of Lyft and Uber, Polvi said. It's becoming embedded in clouds and data centers as a new piece of the infrastructure, one that can take advantage of containers at scale and one that will allow a more self-driving software infrastructure to be created.
About the Author(s)
Editor at Large, Cloud
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.