Containers Help IT Move From Pet Mode to Cattle Mode
Microservices are a fundamentally different way of developing code. IT can increase operational resilience with containers.
You may be wondering what we mean by moving from pet mode to cattle mode.
When a family pet gets ill or injures itself, we rush it to the veterinarian, willing to spend untold thousands to help it heal. After all, it’s a member of the family!
People who raise cattle have a very different perspective on their livestock. Should one become ill or injured, it’s put down, terminated. End of story. Sad perhaps, but true.
This is analogous to the difference in the way developers treat traditional monolithic applications compared to the way they handle code and assets residing in containers. The legacy application is their baby, nurtured over a long period of time. Should anything go wrong, especially something that halts production, both heaven and earth will be moved if necessary to restore the application to health.
Conversely, the container is seen as expendable. If it has a problem, kill it and run another to perform its function. End of story.
From the perspective of operations and users this can be seen as a boon! When the big old app gets into trouble everything and everyone stops. When a container fails, nobody even knows it happened. Another one, identical to the original, takes over and everybody continues to have a good day. That’s what we refer to as “resilience.”
DevOps: The Obvious Beneficiary of Containers
By their very nature, transitions are the likeliest points of failure in any process. The transition from application development to “go live” and handing responsibility over to operations is the first real opportunity to experience operational failure of an application.
The typical developer of classic monolithic applications has never been required to pay much attention to storage. They simply expect Ops to provision adequate storage once the application is handed over.
Microservices are a fundamentally different way of developing code. Many required functions are coded and contained separately. Each container holds not only the code but all of its runtime dependencies, including libraries and other binaries. Containers running on the same host share a common operating system kernel rather than each carrying its own, as a virtual machine would. Packaged this way, each microservice is completely autonomous in its container, protected against conflicts with other processes that rely on the underlying operating system.
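As a sketch of what that packaging looks like in practice, here is a minimal, hypothetical Dockerfile for one such microservice (the service name and files are illustrative, not from the article):

```dockerfile
# Hypothetical microservice image: the code and every runtime
# dependency travel together, independent of the target host.
FROM registry.access.redhat.com/ubi8/python-39

WORKDIR /app

# Libraries and other binaries are baked into the image itself,
# so Ops never has to reinstate them at deploy time.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app.py .
CMD ["python", "app.py"]
```

Any host with a container runtime can run the resulting image unchanged, which is exactly what removes the Dev-to-Ops handoff as a failure point.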
But Wait, There’s More…
Another significant advantage for developers is that when working with containers, they are able to specify required characteristics for the storage that will be used when the application goes operational. This contributes to the maintenance of far more standardized environments, and standardization is one of the best strategies for increasing resilience.
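In Kubernetes terms, one common way to express those storage requirements is a PersistentVolumeClaim. The sketch below is illustrative: the claim and class names are assumptions, and the matching StorageClass would be defined by Ops.

```yaml
# Hypothetical claim: the developer declares the storage traits the
# application needs; Ops satisfies them with a matching StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data               # illustrative name
spec:
  accessModes:
    - ReadWriteOnce               # single-node read/write access
  storageClassName: fast-replicated   # assumed class provided by Ops
  resources:
    requests:
      storage: 10Gi               # capacity the service requires
```

Because the requirement is declared in the same artifact that ships with the application, every environment from dev to production can satisfy it the same way, which is the standardization the paragraph above describes.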
Operations may also choose to run the containers and the storage on the same x86 servers, further increasing resilience by keeping everything under a single control plane. Because the storage layer is itself software running alongside the workloads, it can be distributed across those servers, something traditional storage appliances cannot do.
When the Ops team receives an old-fashioned application, it must make certain that all of the runtime components the application requires are properly in place. This is where the greatest opportunity for initial failure lies, and it often results in many lost cycles working with Dev to reinstate missing components.
Since containers are packaged with everything they need, this chance for failure is eliminated. And this is just the beginning.
A typical monolithic application running on a typical operating system, whether physical or virtualized, may be halted by any one of a number of conditions. More often than not, when this occurs the entire application goes down and must be restored. No functionality is available to anyone until that is accomplished.
In a container-based environment, any component may encounter difficulty and halt. That does not implicate any of the other containers in any way; they continue to function. Developers can build in redundancies as separate containers, enabling a near-instantaneous failover from one instance of a microservice to another. This makes container-based applications somewhat self-healing and far more resilient than any monolithic application could be.
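A minimal sketch of that redundancy, assuming Kubernetes as the orchestrator (the service name, image, and health endpoint below are hypothetical):

```yaml
# Hypothetical Deployment: three identical replicas of one microservice.
# If a container fails its liveness probe, Kubernetes kills it and starts
# a replacement, while the surviving replicas keep serving traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3                     # redundant copies of the same container
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example.com/payments:1.0   # illustrative image
          livenessProbe:                    # health check per container
            httpGet:
              path: /healthz
              port: 8080
```

This is cattle mode in configuration form: no single container is nursed back to health; an unhealthy one is terminated and replaced automatically.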
Open Source as a Way to Resilience
The operating system constructs that make up the concept of containers have been around for at least a decade. They have become democratized through open source technologies such as Docker and Kubernetes, two open source communities with large contributions from Red Hat. Deploying open source technology on industry-standard hardware, as opposed to closed software on monolithic hardware, further reduces risk and increases resilience. Red Hat has been a trusted advisor in the data center, a leader in OpenStack development, and continues to support the mainstream adoption of containers.
Irshad Raihan is a product marketing manager at Red Hat Storage. Previously, he held senior product marketing and product management positions at HP and IBM. He is based in Northern California and can be reached on Twitter @irshadraihan.