Over the past few years, DevOps has become increasingly commonplace among enterprise IT teams. According to the 2017 State of DevOps Report from Puppet and DORA, as of just three years ago only 16% of the IT professionals surveyed had worked on DevOps teams. This year, that number increased to 27%, demonstrating how quickly the approach is catching on.
A separate 2017 survey of 1,770 senior business and IT executives commissioned by CA Technologies found that 87% of organizations had implemented DevOps, at least to a small degree. Those organizations reported significant benefits: 74% experienced improved customer experience, and 77% saw improved employee recruitment and retention. In addition, the CA report says DevOps resulted in a 43% improvement in employee productivity, a 40% improvement in new business growth and a 38% reduction in IT-related costs.
Those sorts of statistics are likely to encourage more organizations to begin investigating DevOps. However, for newcomers to the concept, DevOps can be something of a mystery.
There is no industry-standard definition for DevOps. Instead, it's a loosely defined approach where developers and IT operations personnel collaborate more closely. It involves automating processes to achieve greater efficiency and changing IT culture to favor short iterations and frequent software updates.
As the approach has caught on, DevOps practitioners have developed their own terminology and jargon that can be difficult for the uninitiated to understand. This slideshow examines 12 of the most important DevOps terms and explains them in simple language.
DevOps began as an extension of the agile software development movement. In 2008, Andrew Shafer and Patrick Debois met at the Agile conference to discuss applying agile principles to IT operations, an idea they described as "agile infrastructure." Debois coined the term "DevOps" the following year, when he organized the first devopsdays conference in Ghent, Belgium.
Agile software development has its own manifesto and a well-defined set of foundational principles. In a nutshell, this methodology advocates short development cycles, delivering software early, making updates very frequently, embracing change, collaborating closely with business managers and end users, face-to-face conversations, self-organizing teams, simplicity, and continuous improvement. These principles allow developers in an agile environment to deliver software more quickly and respond to changes efficiently. DevOps teams seek to apply these same principles throughout the IT organization.
To increase their efficiency, DevOps teams use automation software to minimize the number of tasks they must perform manually. For example, when developers need new development, test or production environments, they can use an automation platform to provision those environments for them. This saves time by eliminating the need for operations staff to configure servers manually, and it helps reduce mistakes by standardizing the server configurations. Automation also makes it much easier to scale up or down as necessary. Popular DevOps automation tools include Chef, Puppet, Ansible, Jenkins, Vagrant, Gradle, SaltStack and others.
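As a rough sketch of what this automation looks like, one of the tools named above, Ansible, describes a server's desired configuration in a declarative "playbook." The host group and package below are hypothetical, but the module names are real Ansible built-ins:

```yaml
# Hypothetical Ansible playbook: configure every server in the
# "testservers" group the same way, instead of by hand.
- hosts: testservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
```

Because the playbook describes an end state rather than a list of commands, running it twice is safe: Ansible only changes what doesn't already match.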
Within IT, configuration management refers to the process of keeping track of the hardware and software under IT's control. It involves monitoring information such as which version of which software is installed on which systems, device serial numbers, device ownership, and which users can use which printers.
As a discipline, configuration management has been around far longer than DevOps. But DevOps practices have changed traditional configuration management practices, putting much more emphasis on automation to handle these mundane tasks.
Containers, particularly Docker containers, have become very popular among DevOps teams as a way to simplify application deployment. A container packages together an app with all of its dependencies, that is, all the other software it needs to run properly. As a result, no matter where you deploy a container — in a public cloud, in a private cloud or on a server in your data center — it will always run the same way, and you can easily move a container from one environment to another (which is a huge benefit in hybrid cloud environments).
In addition, a container also isolates an application from the other applications running on the same server. That makes it easier to run multiple applications on a single server or cluster of servers. Container technology is similar to virtualization, but containers are lighter weight (they don't use as many system resources) and do not require a hypervisor.
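To make the "packages together an app with all of its dependencies" idea concrete, here is a minimal, hypothetical Dockerfile for a small Python application (the file names are illustrative):

```dockerfile
# Hypothetical Dockerfile: bundle a small Python app with its
# dependencies so it runs the same way in any environment.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building this file produces an image that can be run unchanged on a laptop, in a data center, or in any cloud that supports containers.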
Continuous Delivery/Continuous Integration
In discussions about agile development and DevOps, the word "continuous" comes up a lot, often in reference to continuous delivery or continuous integration. Both these phrases relate to the core agile principle of frequent updates.
More specifically, organizations that practice continuous delivery (CD) have short development cycles and frequent testing so that their code can be ready to release to end users at almost any time. When taking a CD approach, DevOps teams might release software updates to end users every week or two — or even on a daily basis.
Continuous integration (CI) is similar to, but distinct from, CD. In CI, developers who are working on the same project all merge their code together at least once per day (and sometimes much more frequently than that). This practice helps minimize the problems that can occur if multiple people are all independently working on the same code at the same time.
DevOps teams often use CI/CD applications to help them track their pipeline of software currently in development.
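As an illustration of what such a pipeline definition looks like, here is a minimal workflow for one popular CI service, GitHub Actions (one of many options, and not endorsed by the survey data above); the build commands are hypothetical:

```yaml
# Hypothetical CI pipeline: on every push, check out the code, run the
# test suite, and (if it passes) build the release artifact.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
      - name: Build artifact
        run: make build
```

Because the pipeline runs automatically on every merge, integration problems surface within minutes rather than at release time.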
In much the same way that DevOps took the principles of agile development and applied them to operations and infrastructure, DevSecOps takes the principles of DevOps and applies them to security. DevSecOps is a newer concept, but it is quickly growing in popularity.
In much the same way that DevOps blurs the lines between development and operations by encouraging closer collaboration, DevSecOps blurs the lines between security and the rest of IT. The ultimate goal of DevSecOps is to make everyone in the IT organization responsible for security throughout the application lifecycle — a mindset that requires a significant cultural change in most organizations.
Recently, trends like software-defined networking, software-defined storage and even software-defined data centers have become all the rage. In all these cases, control over the infrastructure — whether that infrastructure is networking gear, storage arrays or an entire data center — has been abstracted away from the hardware and is instead controlled by software.
These software-defined trends are examples of infrastructure-as-code, an umbrella term for using code to control the hardware in your environment. Infrastructure-as-code is popular among DevOps teams because having programmable infrastructure makes it easier to use automation to manage and configure your systems and devices.
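The core idea can be shown with a toy Python sketch that uses no real cloud API: the desired infrastructure is declared as data, and a generic routine converges the actual state (here just a dictionary of named servers) toward it. All names and fields are hypothetical:

```python
# Toy infrastructure-as-code sketch: desired state is data, and a
# generic routine makes reality match it (no real cloud API involved).

desired = {
    "web-1": {"size": "small", "ports": [80, 443]},
    "db-1":  {"size": "large", "ports": [5432]},
}

def converge(actual, desired):
    """Create, reconfigure, or remove servers so `actual` matches `desired`."""
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actual[name] = dict(spec)   # create or reconfigure
    for name in list(actual):
        if name not in desired:
            del actual[name]            # remove anything undeclared
    return actual
```

Because the routine compares state rather than replaying commands, it is idempotent: running it a second time changes nothing, which is exactly the property that makes programmable infrastructure easy to automate.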
An iteration is simply a process that gets repeated over and over again. Agile and DevOps teams often talk about having short iterations. This is just another way of saying that update cycles are short.
Before agile and DevOps became popular, developers often worked for months or years on major software overhauls. But with the short iterations of agile and DevOps, developers might work for only a day or two on changing just one feature in a piece of software before pushing that new feature out to end users. The advantage of this approach is that it gets improved software into the hands of end users more quickly.
Microservices architecture is an approach to designing applications where the software is composed of many small, independent pieces, or services. When you design software in this way, as opposed to creating one huge monolithic application, it's easier to make tweaks and updates to the individual pieces without causing disruption to the application as a whole. It also makes it possible to reuse or share services among several applications.
Microservices architecture isn't necessary for DevOps, but the two often go hand in hand. If your philosophy is to make frequent, small changes to your applications, it's easier to make those changes if the application is broken out into small, independent pieces that can be updated on their own.
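A toy Python illustration of the decomposition (the service names and data are hypothetical, and real microservices would communicate over a network rather than through direct calls): each service owns its own data and exposes only a narrow interface, so one can be changed without touching the other.

```python
# Toy microservices-style decomposition: each "service" owns its data
# and talks to the other only through a small public interface.

class InventoryService:
    """Owns stock data; nothing else may read it directly."""
    def __init__(self):
        self._stock = {"widget": 5}

    def reserve(self, item):
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False

class OrderService:
    """Owns orders; depends only on InventoryService's interface."""
    def __init__(self, inventory):
        self._inventory = inventory
        self.orders = []

    def place_order(self, item):
        if not self._inventory.reserve(item):
            return None
        self.orders.append(item)
        return len(self.orders)  # order id
```

The inventory logic can be rewritten, redeployed, or scaled on its own, as long as its interface stays stable — which is what makes frequent, small updates practical.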
Serverless computing and function as a service (FaaS) are largely interchangeable terms. Both refer to a type of cloud computing where developers don't have to worry about the underlying infrastructure for their applications. Of course, these cloud services aren't really serverless — the application still runs on a physical server in a cloud computing data center somewhere. However, it feels serverless to the developers, because they don't have to configure, optimize or manage the underlying infrastructure. Instead, they just write code.
Many DevOps teams find serverless computing attractive because it serves as an additional form of automation and allows them to become even more efficient. Some people even refer to serverless as a form of "NoOps" because instead of the developers working closely with operations and infrastructure professionals, developers don't really have to think about infrastructure at all.
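To show how little "just write code" can be, here is a sketch of a FaaS-style function using the AWS Lambda Python handler signature (an event dictionary in, a response dictionary out); the field values are hypothetical:

```python
# Sketch of a serverless/FaaS function: the developer writes only this
# handler; the platform provisions and scales the servers that run it.
import json

def handler(event, context):
    """Return a greeting; no infrastructure for the developer to manage."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything below the function — servers, scaling, patching — is the cloud provider's problem, which is why some people describe the model as "NoOps."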
Sometimes developers are in such a hurry to get software out the door that they take shortcuts. Instead of doing what will be best in the long run, they do what is fastest. The problem is that these shortcuts often generate extra work later, because someone has to go back and do things the right way (a process developers call "refactoring").
This extra work is called "technical debt." Because of their emphasis on speed and efficiency, DevOps teams are at high risk for incurring technical debt. In much the same way that businesses sometimes need to take out financial loans in order to expand or grow, DevOps teams might sometimes need to incur some technical debt in order to meet deadlines. However, they shouldn't use the need for fast delivery as an excuse for poor code quality, and they shouldn't lose sight of the fact that technical debt will need to be repaid.
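A hypothetical before-and-after in Python shows what paying down this kind of debt looks like — the deadline version works for today's input, while the refactored version makes its assumptions explicit:

```python
# Hypothetical technical-debt example.

# Shortcut taken to meet a deadline: silently assumes US-dollar strings.
def parse_price_quick(text):
    return float(text.replace("$", ""))

# Refactored ("debt repaid") version: explicit about the currency symbol
# and loud about malformed input, so future callers aren't surprised.
def parse_price(text, symbol="$"):
    text = text.strip()
    if not text.startswith(symbol):
        raise ValueError(f"expected price starting with {symbol!r}: {text!r}")
    return float(text[len(symbol):])
```

The quick version ships faster; the refactored version is the payment that eventually comes due.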
All software needs to be tested before delivery to end users, but organizations can take a lot of different approaches to that testing. In unit testing, small pieces of an application are tested independently. Then, once all the individual pieces are working properly, the team can test the application as a whole.
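A minimal unit test using Python's built-in unittest module illustrates the idea — the small piece under test (a hypothetical `slugify` helper) is checked in isolation, before the application is tested as a whole:

```python
# Minimal unit-test sketch with Python's standard unittest module.
import unittest

def slugify(title):
    """Turn a page title into a URL-friendly slug (hypothetical helper)."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello DevOps World"), "hello-devops-world")
```

Running `python -m unittest` discovers and executes tests like this one, which is what makes them easy to wire into an automated pipeline.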
This approach is popular with DevOps teams because it simplifies testing automation, and because it can speed up the testing process. In addition, it fits in with the overall principle of tackling small pieces of work that can be completed quickly. However, like other DevOps practices, this approach to testing requires a different mindset than traditional testing, and it requires a cultural change that can be difficult for some organizations.

Cynthia Harvey is a freelance writer and editor based in the Detroit area. She has been covering the technology industry for more than fifteen years.