Enterprise IT architectures are becoming increasingly abstract. From server virtualization and cloud computing five or so years ago to software-defined networking today, visualizing these technologies end-to-end has become a real mental challenge. Unless you deal with abstract technologies on a regular basis, understanding the concepts can be painstaking and time-consuming.
Just as CIOs and other less-technical IT managers finally wrap their heads around server virtualization and cloud computing, along comes another abstract technology: containers. Containers have been around for a while in the Linux world, but Microsoft recently got into the game when it announced container support in the Windows Server 2016 Technical Preview 3 release. Containers won't be critical only on the Linux side of the house; they will be equally important on the Windows side. That's why it's essential to get a handle on what containers are and the components that make them work.
Fret not! We'll give you the basic rundown and key terminology you need in order to understand and speak "container" with others.
In reality, from an enterprise data center standpoint, containers are little more than the next evolutionary step in server virtualization. As you may recall, server virtualization is nothing more than creating logically separated servers that operate on the same physical hardware. Each virtualized server is allocated its own CPU, storage, memory, OS, and network resources without ever "knowing" it has been given only a fraction of what the hardware can actually provide.
Containers do essentially the same thing, but at the OS level. Instead of running a separate OS for each virtual instance, the instances share a single OS kernel. Yet any changes or modifications made inside one container are not visible to any other container. Because fewer operating systems are running, fewer physical server resources are needed. And once built, a container is portable: It can easily be moved around an organization or shared with anyone who wants to use it. Compared with traditional server virtualization techniques, then, the three key values of containers are simplicity, efficiency, and portability.
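To make the shared-OS idea concrete, here is a minimal sketch using Docker, the most widely used container runtime (this assumes a Linux host with Docker installed; the image names are just common public examples):

```shell
# Two containers built from entirely different Linux distributions...
docker run --rm ubuntu uname -r
docker run --rm alpine uname -r

# ...report the same kernel version as the host itself, because
# containers share the host's OS kernel rather than booting their own.
uname -r
```

All three commands print the same kernel version. A traditional virtual machine, by contrast, would boot its own kernel and could report something entirely different.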
From an enterprise IT practicality standpoint, containers initially will help two primary groups of IT employees: software developers and server administrators.
Software developers will latch onto containers because of their ability to deploy identical images across multiple environments for testing purposes. And container sharing isn't limited to other developers inside an organization: Images can be published to public repositories, where developers around the world can pull down and reuse them. The process is similar to how programmers share useful snippets of code so others don't have to reinvent the wheel.
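In Docker's case, what gets shared is a small text recipe called a Dockerfile, which anyone can build into a bit-for-bit identical image. A minimal, hypothetical example (the application and file names here are placeholders):

```dockerfile
# Start from a public base image pulled from a shared repository.
FROM python:3-slim

# Copy the application in and install its dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Define what runs when the container starts.
CMD ["python", "app.py"]
```

Building this file with `docker build` produces the same image on a developer's laptop, a test server, or a teammate's machine anywhere in the world, which is exactly why identical deployments across environments become trivial.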
Server administrators are likely the people who manage your virtualized server architecture today. These same admins will be able to deploy containers that act as various standardized environments for production, development, test, QA, and so on. By skipping the need to spin up a brand new OS each time a fresh environment is needed, containers cut out several time-consuming steps.
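For an administrator, "skipping the OS" looks something like the following Docker sketch (assuming Docker is installed; `test-env` is a made-up name and `nginx` is just a common public image):

```shell
# Stand up a standardized test environment from a prebuilt image.
# This takes seconds; provisioning a fresh VM with its own OS
# typically takes minutes at best.
docker run -d --name test-env -p 8080:80 nginx

# When testing is done, tear it down just as quickly.
docker stop test-env
docker rm test-env
```

The same image can back the production, development, test, and QA environments, so each one starts from a known, identical baseline.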
Now that you've had a brief overview of the benefits of containers, who will likely use them, and why, we'll review key terminology used to describe components of the container architecture.