Containers 101: 10 Terms To Know
Application containers have been around for a while in the Linux world, but now Microsoft is moving them into the mainstream. Here's what you need to know about this evolution of server virtualization.
![](https://eu-images.contentstack.com/v3/assets/blt69509c9116440be8/blt4dd56d5708f04c9e/64cb5364464ef5ad66424fb3/1-intro.jpg?width=700&auto=webp&quality=80&disable=upscale)
Enterprise IT architectures are becoming increasingly abstract. From server virtualization and cloud computing a few years ago to software-defined networking today, visualizing these technologies end-to-end has become a real mental challenge. Unless you deal with these abstract technologies regularly, understanding the concepts can be a painstaking and time-consuming task.
Just as CIOs and other less-technical IT managers finally wrap their heads around server virtualization and cloud computing, along comes another abstract technology: containers. Containers have been around for a while in the Linux world, but Microsoft recently got into the game when it announced container support in the Windows Server 2016 Technical Preview 3 release. Containers won't be critical only on the Linux side of the house; they will be equally important on the Windows side. That's why it's essential to get a handle on what containers are and the components that make them work.
Fret not! We'll give you the basic rundown and key terminology you need in order to understand and speak "container" with others.
In reality, from an enterprise data center standpoint, containers are little more than the next evolutionary step in server virtualization. As you may recall, server virtualization is nothing more than creating logically separated servers that operate on the same physical hardware. Each virtualized server is allocated its own CPU, storage, memory, OS, and network resources, without ever knowing it has been given only a fraction of the total resources the hardware can provide.
Containers do essentially the same thing, but at the OS level. Instead of running a separate OS for each virtual instance, the instances share a single OS, yet changes or modifications made to the OS inside one container are not visible to any other. Because fewer operating systems are running, fewer physical server resources are needed. And a container, once built, is portable: it can easily be moved around an organization or shared with anyone who wants to use it. Compared with traditional server virtualization techniques, then, the three key values of containers are simplicity, efficiency, and portability.
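To make the shared-OS point concrete, here is a minimal sketch in Python, assuming Docker is running locally and the `docker` SDK (`pip install docker`) is installed. Two containers built from different images still report the same kernel version, because they share the host's OS rather than each booting their own:

```python
import docker

client = docker.from_env()

# Each container runs `uname -r` and prints the kernel version it sees.
# Both report the host's kernel: containers share one OS instead of
# booting a separate one per instance, the way virtual machines do.
for image in ("alpine:3.19", "ubuntu:22.04"):
    kernel = client.containers.run(image, "uname -r", remove=True)
    print(image, "->", kernel.decode().strip())
```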
From an enterprise IT practicality standpoint, containers initially will help two primary groups of IT employees: software developers and server administrators.
Software developers will latch onto containers because of their ability to deploy identical images across multiple environments for testing purposes. Additionally, container sharing isn't limited to other developers inside an organization. Containers can be shared through public repositories where developers around the world can share various images. The process is similar to how programmers share snippets of code that others may find useful, and thus don't have to reinvent the wheel.
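As a rough illustration of that sharing model (again assuming Docker and the Python `docker` SDK), pulling a publicly shared image from Docker Hub is a single call, and every developer who pulls the same tag gets an identical image:

```python
import docker

client = docker.from_env()

# Pull a community image from Docker Hub's public registry. Anyone who
# pulls the same name and tag gets a byte-for-byte identical image.
image = client.images.pull("python", tag="3.12-slim")
print(image.id, image.tags)
```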
Server administrators are likely the people who manage your virtualized server architecture today. These same admins will be able to deploy containers that act as various standardized environments for production, development, test, QA, and so on. By skipping the need to spin up a brand new OS each time a fresh environment is needed, containers cut out several time-consuming steps.
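Here is one hedged sketch of what that looks like in practice; the image name, container names, and the `APP_ENV` variable are illustrative placeholders, not a prescribed setup:

```python
import docker

client = docker.from_env()

# Spin up one identical container per environment. There is no OS to
# install or boot; each instance starts in seconds from the same image.
for env in ("dev", "test", "qa", "prod"):
    container = client.containers.run(
        "alpine:3.19",
        "sleep 3600",
        name=f"demo-{env}",          # hypothetical naming scheme
        environment={"APP_ENV": env},
        detach=True,
    )
    print(env, "->", container.short_id)
```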
Now that you've had a brief overview of the benefits of containers, who will likely use them, and why, we'll review key terminology used to describe components of the container architecture.
In its most basic sense, a container is a way to isolate and control applications and their dependencies, such as registry keys, application settings, and descriptors. Multiple containers can operate on the same virtualized operating system, but each container is logically segmented, and settings/modifications in one container do not affect any other container.
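You can see that segmentation directly. In this sketch (assuming Docker and the Python `docker` SDK, with a placeholder file path), a file written inside one container simply does not exist in another:

```python
import docker

client = docker.from_env()

# Start two containers from the same image, then modify a file in one.
a = client.containers.run("alpine:3.19", "sleep 60", detach=True)
b = client.containers.run("alpine:3.19", "sleep 60", detach=True)

a.exec_run("sh -c 'echo changed > /etc/demo-setting'")

# The second container never sees the change: nonzero exit code,
# file not found.
code, output = b.exec_run("cat /etc/demo-setting")
print(code, output.decode().strip())

for c in (a, b):
    c.remove(force=True)
```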
Within a container, the abstract layer where all data writes are performed while the container is running is known as the container sandbox. This is where all OS modifications, registry changes, and any applications are installed. All other parts of the container remain untouched.
The container application image is the static portion of the application together with any changes that were committed from the sandbox layer. Modifications made in the sandbox can either be saved as a new container image or discarded, leaving the original container image untouched.
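The save-or-discard choice maps onto two operations in this hedged sketch (the `myapp` repository name and the file written are illustrative):

```python
import docker

client = docker.from_env()

# Start a container and write into its sandbox (the writable top layer).
c = client.containers.run("alpine:3.19", "sleep 60", detach=True)
c.exec_run("sh -c 'echo v1 > /app-config'")

# Save the sandbox changes as a new, reusable image...
snapshot = c.commit(repository="myapp", tag="configured")
print("new image:", snapshot.tags)

# ...or simply remove the container to discard them. Either way, the
# original alpine image remains untouched.
c.remove(force=True)
```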
A base container is the container's minimalist operating system layer, on top of which the container application image and sandbox run. Upper-layer container changes and modifications can be added or removed, depending on an application's needs. Think of the base container as the foundation of a house built from Lego blocks: you can build any type of house you desire, but you always start from the same base.
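The layering is visible in how images are built. In this sketch (assumptions as before; the image tag and file contents are placeholders), the `FROM` line names the base layer and everything after it stacks on top:

```python
import io
import docker

client = docker.from_env()

# A minimal Dockerfile held in memory: the FROM line is the Lego
# baseplate, and each later instruction is a layer stacked on top.
dockerfile = b"""
FROM alpine:3.19
RUN echo 'hello from my layer' > /greeting
CMD ["cat", "/greeting"]
"""

image, _ = client.images.build(fileobj=io.BytesIO(dockerfile), tag="greeting:demo")
print(client.containers.run("greeting:demo", remove=True).decode())
```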
When a container image has been built and completed, it is saved as a reusable container image in a local repository. Container images can also be uploaded to public repositories and shared with anyone you like. A great example of a public repository where users share images is Docker Hub.
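Publishing follows a tag-then-push pattern, sketched here with placeholders: `youruser/hello` stands in for your own Docker Hub account, and a real push requires authenticating first with valid credentials:

```python
import docker

client = docker.from_env()

# A push requires prior authentication, e.g.:
# client.login(username="youruser", password="...")  # placeholder account
image = client.images.pull("alpine", tag="3.19")
image.tag("youruser/hello", tag="v1")  # hypothetical repository name
for line in client.images.push("youruser/hello", tag="v1", stream=True, decode=True):
    print(line)
```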
Namespace isolation is the term for the secret sauce of how container software hides files and processes so that each container believes it is the only application running on the OS. Each container has its own individual namespace, and its registry changes, application installs, processes, and other file modifications are prevented from being seen by any other container.
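Process namespaces make this easy to demonstrate (same Docker and Python SDK assumptions as above): inside its own namespace, the container's first process is PID 1, and the host's processes, and those of every other container, are simply invisible:

```python
import docker

client = docker.from_env()

# List the processes visible inside the container. Typically the only
# entry is `ps` itself, running as PID 1 in its own namespace.
output = client.containers.run("alpine:3.19", "ps -o pid,comm", remove=True)
print(output.decode())
```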
To prevent one or more containers from claiming the lion's share of finite server resources such as memory, network, and CPU, resource governance techniques are used to enforce maximum limits. So containers are separated not only at the shared OS level, but also at the shared server-resource level.
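In Docker's case, those limits can be set when a container starts; a minimal sketch, with the specific caps chosen purely for illustration:

```python
import docker

client = docker.from_env()

# Cap this container at 256 MB of RAM and half of one CPU. The kernel
# enforces the limits, so one container cannot starve the rest.
container = client.containers.run(
    "alpine:3.19",
    "sleep 3600",
    mem_limit="256m",
    nano_cpus=500_000_000,  # 0.5 CPU, expressed in billionths of a CPU
    detach=True,
)
print(container.short_id, "running with capped resources")
container.remove(force=True)
```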
When you boil it down and understand the basic terminology, you begin to see that containers are essentially server virtualization, only at the OS layer. Each container is given the perception that it is the only thing running on a clean and untouched operating system.
Yet, in reality, there likely are dozens, hundreds, or even thousands of containers, each piggybacking off of the same operating system. The container system reduces resource needs and simplifies many development processes.
Now that you understand the power of containers, you'll be ready as the concept pops up more and more frequently in IT meetings and around the water cooler.