DevOps Fails More Often Than Not – Here’s Why

DevOps presents an opportunity to build better software, but there's no guarantee of success. A look at how to do DevOps right.

Guest Commentary

October 18, 2017

Every now and again, an idea comes along that forces organizations to reevaluate the status quo, one that revolutionizes an industry by pushing its principles past existing barriers. In terms of software methodologies, it’s the best thing since sliced bread. Every software organization has been either preaching or reading about it. I’m talking about DevOps.

DevOps enables an enterprise to achieve greater efficiency and productivity by adhering to principles that aim to create more dependable releases and, ultimately, closer alignment with business objectives. If you are reading this, then more likely than not you are associated with the software industry. As in most businesses, the goal is simple: to make money, in this case with software that yields revenue. Agile methodologies are the Holy Grail of software delivery, and DevOps IS a move in the right direction. Be careful, though: implementing a DevOps culture isn’t a straightforward, cookie-cutter process.

DevOps is a great tool when it works, but before you get too far down the rabbit hole, here are some common observations from failed DevOps experiments that are worth taking note of:

It’s only a means to an end. Organizations that have successfully adopted DevOps methodologies look to improve quality through processes generated by their respective experiences, ingenuity, and cultures. That being said, a process is only as good as its return. Do not get caught up in the process itself; focus on the outcome of your DevOps project. If it works, stick with it; if it doesn’t, try something else. Keep your focus on the end game: value.

Start with software that already works. DevOps should help your organization create better software, but installing a DevOps culture will not cure your organization’s broken software and make it function. Sometimes it’s best to let dead things stay dead and start with something that works from the outset. This is neither the time nor the place to create your own Frankenstein’s monster. An organization uses DevOps to help create better software; DevOps does not create software.

DevOps has no standard technology stack. If there is a standard technology stack for creating a DevOps culture, I want to package it and sell it at a premium. Many technologies offer pieces of what an organization would use for implementation, but these are specific functions that need to be orchestrated into controlled flows. Tools such as Jenkins, Bamboo, Travis CI, QuickBuild, and CircleCI offer some coordination, bringing continuous integration and, in some cases, continuous deployment to the table, but this model starts to fail once you incorporate various technologies and teams from across the organization into the mix.
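To make that coordination concrete, here is a minimal sketch of what a continuous integration pipeline definition might look like in a CircleCI-style config.yml. The build image and the Gradle commands are placeholders for whatever your stack actually uses, not a recommendation:

    version: 2.1
    jobs:
      build-and-test:
        docker:
          - image: cimg/openjdk:17.0       # placeholder build image; match it to your own stack
        steps:
          - checkout                        # pull the commit that triggered the build
          - run: ./gradlew build            # placeholder build command
          - run: ./gradlew test             # run the automated tests on every commit
    workflows:
      main:
        jobs:
          - build-and-test

A Jenkins or Bamboo equivalent would express the same stages differently, which is exactly the point: the flow matters more than the tool.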

Visibility, auditability, and control become very big issues when applied across the enterprise. With the advent of newer technologies such as containerization and microservices, the possibilities grow exponentially. Not only do newer technologies require new stacks, but teams must coordinate both new and old technologies to get full visibility into their implementations. Adding to the mix, DevOps is not limited to distributed systems; in fact, the mainframe is starting to develop its own DevOps movement as well. I’ve seen it; it works. There is no "one size fits all" technology stack. You can get close, but ultimately the stack will vary based on the implementation, because teams have multiple choices for every type of software architecture available. Whether it is Docker, Kubernetes, OpenShift, Mesosphere, etc., for container management, or WebSphere, IIS, Tomcat, etc., for application servers, or multiple choices for operating systems, there is a multitude of technologies available, and you need a way to orchestrate all of these integrations in accordance with industry and organizational requirements.
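As a hypothetical illustration of what "container management" means in practice, here is a minimal Kubernetes Deployment manifest for a single service. The service name and image are invented for the example, and the equivalent artifact on OpenShift, Mesosphere, or a plain Docker host would look different:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service                  # hypothetical microservice
    spec:
      replicas: 3                           # run three identical containers for resilience
      selector:
        matchLabels:
          app: orders-service
      template:
        metadata:
          labels:
            app: orders-service
        spec:
          containers:
            - name: orders-service
              image: registry.example.com/orders-service:1.0.0   # placeholder image reference
              ports:
                - containerPort: 8080       # port the container listens on

Multiply this by every service, every environment, and every team’s preferred stack, and the need for orchestration and end-to-end visibility becomes obvious.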

To know DevOps, you have to actually do DevOps. Three words haunt my dreams: The Phoenix Project. Yes, Gene Kim is the man when it comes to evangelizing DevOps -- his books have flown off the shelves and thousands have flocked to listen to his talks on DevOps -- the guy knows his stuff. But no, reading that book, or any other, does not guarantee that you now possess the skills or knowledge necessary to get a DevOps culture up and running. It’s a great introduction and sneak peek into the world of DevOps, but make no mistake about it: DevOps is hard work that you must both know and do.

There is no set organizational standard. What makes things even more difficult is that there is no de facto standard for how DevOps should be implemented from an enterprise point of view. There are multitudes of flavors, and even more ways of adapting them to make sure the implementation fits what the enterprise needs.

Building microservice applications does not mean you are doing DevOps. Yes, the technology is awesome. Yes, I like saying my applications run in Docker containers as much as the next guy, but just because your dev team is building microservice-based applications does not mean you have successfully implemented a DevOps culture. Done correctly, DevOps makes life much easier when designing and implementing microservice architectures, but you don’t necessarily need it to get up and running with this technology. This is one of the biggest misconceptions out there. You don’t have to be running development in a DevOps culture to develop microservice-based applications; in fact, for teams already deep into microservices, the transition to a DevOps culture can hinder the development cycle at the very beginning of the change.

Technology is only a fraction of the solution. One thing remains consistent no matter which methodology your organization adopts: Technology is limited only by its implementation. Implementations require participation from the entire organization and its culture. Packaged software is just a set of tools to be used to achieve a certain goal or purpose. People are the key to a successful DevOps implementation. Whether it be through training, research, or plain discussion, there are always opportunities to better understand why we should do things a certain way, how we should perform an action to get the task done, and what the goal at hand is. Too many times, I’ve seen intelligent people make mistakes because there was a break in communication. Technology is the easy part; the battle comes with communicating and coordinating across the organization. Get everyone on the same page and listen to input from everyone on the team. Incorporate as much input as you can, but do not lose sight of the goal. Leaders lead, managers manage.

Set your sights on creating value. Find out what your organization defines as value. What are the goals? Is the organization on target to reach them? From there, look at the drivers behind your findings. What does the organization need to do to reach those goals? Focus on what your organization can do to attain them. Be constructive. Focusing on what not to do causes an organization to create barriers across the enterprise, and that’s the last thing you want. Encourage positive change. Reward individuals for taking a chance on something new, especially when there is the potential to win big in the end. Most of all, create an environment where people are encouraged to make a positive change.

DevOps is an evolving process. An article by Gordon Haff gives some great insight into how organizations are handling the demands that DevOps places on their teams. There are countless success stories of DevOps transformations out there; organizations like Netflix, Capital One, Target, and Nordstrom are just a few examples. Take the time and invest in learning about DevOps, its application, and how to successfully bring its practices into your organization. Who knows, your organization could be the next big success story. The only way to know is to try.

Mark Tomcza graduated from Northern Illinois University with a degree in economics. He worked on modeling employee benefit structure design for Aon. Following his transition into the world of information technology, Mark landed at Fidelity National Information Services, where he helped run software implementation and delivery, including client-side operations for FIS’ largest client base. Mark also served as DevOps Lead for The Vitality Group, where he helped create and run application and infrastructure projects. Mark serves as a Global DevOps Solutions Architect for Electric Cloud. He enables clients to reach new heights by leveraging the power of existing business ideals, promoting complementary organizational shifts in implementation, and pairing them with cutting-edge technologies. In his free time, Mark enjoys the outdoors, construction projects, creating art, screenwriting, and traveling.

About the Author

Guest Commentary

The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We focus on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice on those topics from people who have deep experience in them and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
