Implementing DevOps Without Containers

Too many organizations equate DevOps with the use of containers. Some even stop at containers and miss the full potential of a complete DevOps practice. Containers can certainly be part of DevOps, but DevOps success can be achieved without using containers at all. This article explores how.

Eric Bruno, Lead real-time engineer at Perrone Robotics

March 8, 2017


One of the reasons to adopt a DevOps approach to software delivery is to remove the bottlenecks in your production deployment processes. For server-side software, this often involves many pieces, including:

  • The application environment, such as operating system parameters

  • Third-party components, such as application servers, web servers, and databases

  • Your application software that runs on top

To remove the deployment bottlenecks, DevOps aims to break down the barriers between developers and IT operations staff (hence the name) to foster a collaborative working environment. As a result, development takes part in making sure production environments are in sync with development environments, and all deployment processes are executed together. One good way to achieve this is through the use of containers, with tools such as Docker and orchestrators such as Kubernetes. In fact, it works so well the two -- containers and DevOps -- have become synonymous, and many people have built a dependency between them.

However, this dependency is not required: you can implement a DevOps practice without containers. Let’s explore how.

Why Containers Make Sense

Although it’s counter to my point, let’s explore where containers actually help DevOps. Containers are lightweight abstractions on top of the OS that manage running software. They help isolate processes from one another, impose restrictions on resource usage, and help to package up software dependencies. It’s important to remember that containers don’t replace virtualization, because they operate closer to the application level, not the physical level.

Containers can be quite useful because of their efficiency. With them, you can deploy and bring software components online quickly, spin up new instances with lower overhead compared with virtualization, and you can control the application environment more closely. For instance, if developers code and build software within a container, the container and everything within it can be packaged and transported to production servers. The efficiency and automation involved enables DevOps and cloud paradigms very well.

Good DevOps use cases for containers revolve around the need to bring new servers online quickly and often. This is usually the case for microservices deployments, as well as for most Agile groups' dev/test cycles in general. Therefore, containers can be very effective for the rapid spin-up and teardown of microservices and dev/test environments. But beyond that, the use of containers in DevOps is more a choice than a requirement, and DevOps doesn’t end there.

Easing Deployment Pain Without Containers

Regardless of the benefits of containers, there are actually reasons not to go with a containerized approach to software deployment. These include:

  • A lack of container skills or knowledge

  • Special application performance requirements (e.g., real-time systems)

  • The use of unsupported hardware or software environments (e.g., embedded systems, specialized or legacy operating systems)

  • Public cloud deployment

  • And so on

Instead of relying on containers for DevOps success, focus on these three enablers:

1. Automation: find processes and tools to enable as much automation as possible. Whether it’s a mainframe application or a microservice, tools can be found to reduce manual effort and the errors that come with it.

2. Continuous integration: test software modules, components, services, and so on, together continuously. Don’t wait until the end of the development cycle to integrate and deploy your system.

3. Continuous testing: with continuous integration, ensure that your system is always workable, testable, and theoretically releasable. Testing the results of your development effort is part of the required feedback loop.
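The three enablers above don’t require container tooling; at heart, a continuous build-test-deploy loop is just an ordered set of stages with fast failure feedback. The sketch below illustrates that idea as a minimal, container-free pipeline runner. The stage names and pass/fail checks are illustrative placeholders, not a real build system; in practice each function would invoke your actual build, test, and deployment scripts.

```python
# A minimal continuous-integration loop without containers: each stage is a
# plain function, run in order, and the pipeline stops at the first failure
# so feedback reaches developers immediately. All stages here are stand-ins.

def build():
    # Stand-in for a real compile/package step (your build tool goes here).
    return True

def integration_test():
    # Stand-in for running the integrated test suite against a shared environment.
    return True

def deploy_to_staging():
    # Stand-in for a scripted, repeatable deployment to a staging server.
    return True

def run_pipeline(stages):
    """Run named stages in order; report the first failure for fast feedback."""
    for name, stage in stages:
        if not stage():
            return "FAILED at " + name
    return "PASSED"

result = run_pipeline([
    ("build", build),
    ("integration-test", integration_test),
    ("deploy-staging", deploy_to_staging),
])
print(result)
```

The point of the sketch is the shape, not the code: whether the stages are shell scripts, Jenkins jobs, or mainframe batch steps, automating them into one ordered, always-runnable sequence delivers the continuous integration and testing described above with no containers involved.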

Granted, specific build and deployment tools are helpful and often required to reach the level of automation needed to make deployment more routine. However, the biggest gains from DevOps come from three main organizational efforts:

  1. Constant development build-and-test cycles

  2. More frequent deployments to production (or similar) servers

  3. Direct and immediate feedback loops back to the developers

With these three efforts, software is never built in isolation, components are integrated constantly (not just before a major release), and everyone involved knows what works and what needs to be improved along the way. As a result, development and IT together get almost constant reassurance that what’s being built will deploy and operate as expected. Bottlenecks are removed along the way, and your deployment processes and production environments are tested along with the software. Because activities are done in parallel from the beginning, instead of in stages, there are no surprises at the end of the development cycle.

People: The Key to DevOps Success

The key to success isn’t in the toolset; it’s in the people, the communication, and the metrics. All of this can be accomplished without any new tools, never mind containers. Why is this so? Because with a DevOps practice, when crunch time comes after weeks or months of developing a new version of software and deadlines loom, the one thing you won’t need to be concerned with is the effect of deploying your system to production, because you’ve effectively been doing it all along. Continuously.

This is why it’s called a DevOps practice and not a DevOps process, group, toolset, or environment. Containers can be a great addition to your DevOps practice to help manage the production environment, but they’re not a necessity. Instead, focus on the DevOps practice first, and use containers where they make sense.

[Learn more about DevOps from Eric Bruno on May 17 at Interop ITX, when he presents Bimodal IT and DevOps: Speed AND Quality or Speed VS. Quality?]

About the Author

Eric Bruno

Lead real-time engineer at Perrone Robotics

Eric Bruno, lead real-time engineer at Perrone Robotics, is a contributing editor to multiple publications with more than 20 years of experience in the information technology community. He is a highly requested moderator and speaker for a variety of conferences and other events on topics spanning the technology spectrum from the desktop to the data center. He has written articles, blogs, white papers, and books on software architecture and development topics for more than a decade. He is also an enterprise architect, developer, and industry analyst with expertise in full lifecycle, large-scale software architecture, design, and development for companies all over the globe. His accomplishments span highly distributed system development, multi-tiered web development, real-time development, and transactional software development. See his editorial work online at www.ericbruno.com.

