IT leaders need to stop lying to themselves. Sure, agile development and virtualized datacenters help them deliver better results. But have technology organizations really made the leaps necessary to improve IT reliability and, even more important, IT's ability to pounce when the business sees opportunity? Instead, IT organizations struggle to keep up with never-ending changes to their tech environments, especially when installing and upgrading applications. On one side, the IT operations zealots want to keep the environments as stable as possible, because they're judged on uptime and cost. On the other side, the crazy developers want to constantly change or add apps, because they are praised for pushing new features to customers, partners, and employees.
The desire to resolve this tension explains the growing love fest for DevOps, an IT methodology that promises improved reliability and efficiency, lower costs, faster response times, and better communications among teams. Twitter, Netflix, and Facebook say they wouldn't be able to implement their tech strategies without DevOps. Eight of 10 companies adopting DevOps approaches in our InformationWeek DevOps Survey say they've realized or expect to see at least some improvement in app deployment speed and infrastructure stability as a result.
Think of DevOps as agile software development for the entire IT life cycle. Teams not only write code in short iterations, but they also test and even implement it in similar bursts, using version-control tools and more highly automated datacenter configurations. DevOps is meant to blow away the mentality of developers writing code and then throwing it over the wall to the datacenter team to figure out how to efficiently run it. In that way, DevOps has its roots in the lean manufacturing world, which fights the problem of engineers designing gear that the factories can't afford to build.
What's not to love, right? But while 75% of tech pros know about DevOps, according to our survey, only 21% of those familiar with it are using it, though another 21% say they expect their organizations to adopt DevOps principles within a year. Those organizations with no DevOps plans say other priorities take precedence, there's no demand for what DevOps promises, or they lack the resources or expertise to implement it. One in five blames confusion around the DevOps concept. "The term sounds like a job role and not a process, so many manager/executive-level people don't want to hear about it," one tech pro said. One-fourth cite lack of cooperation by IT operations, developers, or both.
Other survey respondents just aren't wild about the results from DevOps. "DevOps is a mixed bag," one respondent said. "We have issues with quality control." While 31% have realized or expect significant improvement, 51% report only "some improvement." That leaves a mere 3% who have seen or expect no improvement, plus 15% who think it's too soon to tell.
In concept, DevOps is hard to argue against -- who's against cooperation, speed, and stability? The goals most often cited in our survey are to deliver application updates faster with less downtime, improve IT's ability to track and respond to changes in the infrastructure, and get more visibility into network and application performance. But as the performance numbers in our survey suggest, getting blockbuster results from DevOps might be more difficult than it sounds. That's because DevOps is a methodology, not just a technical tool that IT implements. It takes significant cultural changes along with adoption of technical elements such as automated datacenter configuration.
The role of automation
The critical technical foundations of DevOps include software version control, automated configuration management of IT infrastructure, automated virtual machine deployments, and network management. Our survey finds the three most-cited tools used in DevOps are custom scripts (52%), Puppet (29%), and Chef (20%). Puppet and Chef are tools that provide automated configuration management. As an example, let's look at how an application deployment to a virtualization cluster or even a cloud provider such as GoGrid or Amazon Web Services happens now and how it looks different under DevOps.
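To make the "custom scripts" approach concrete (the most-cited tool in our survey, at 52%), here's a minimal sketch in Python of the core idea behind any deployment automation: compare the desired state against the actual state and act only on the difference, so the script is safe to run repeatedly. The component names and versions are hypothetical, and a real script would query a package manager or service API rather than return hard-coded values.

```python
# Hypothetical custom deployment script: idempotent convergence toward a
# desired state. Component names and versions are illustrative only.

DESIRED = {"apache": "2.4", "java": "8"}   # the agreed-upon architecture

def current_versions():
    """Stand-in for querying the host; a real script would shell out to a
    package manager or hit a service API here."""
    return {"apache": "2.4", "java": "7"}  # pretend java has drifted

def plan(desired, actual):
    """Return only the components whose actual state differs from desired."""
    return {name: ver for name, ver in desired.items()
            if actual.get(name) != ver}

def apply(changes):
    for name, ver in changes.items():
        # A real script would install or upgrade the package here.
        print(f"upgrading {name} to {ver}")

changes = plan(DESIRED, current_versions())
apply(changes)  # touches only what drifted; a second run would do nothing
```

Tools like Puppet and Chef institutionalize this same check-then-converge loop, but with a declarative language, a resource model, and reporting built in.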
Without DevOps, an IT operations team meets with the development team to find out how much storage it needs for a new app, which components the application uses (Apache this, Java that), which version of each component developers are using, and how the components communicate with one another. Being good engineers, the IT ops pros draw the setup in Visio, everyone reviews and agrees, and ops manually configures each system according to that agreed-upon architecture.
Too often, once the enterprise rolls out the new app, it fails because of an error that developers say they never saw in testing, and the ops pros throw up their hands, saying they followed the agreed-upon architecture exactly. The developers go back and figure out what went wrong, and once the problem is fixed, everyone is told to not touch anything. Six months later, this process happens again when developers release an updated version.
Under DevOps, automation tools such as Puppet and Chef help alleviate this problem: developers themselves describe the infrastructure configuration, and the tools automatically deploy the servers, set up Apache or Java with the proper versions, and configure firewall ports. As developers test the application during code iterations, they're doing so on the company's agreed-upon architecture, so when the app moves into production the process is usually much smoother.
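The declarative model behind Puppet and Chef can be illustrated with a toy example. This is not either tool's actual syntax (both use Ruby-based DSLs); it's a hedged Python sketch of the principle: developers declare resources describing the desired end state, and the tool converges each node toward it, skipping anything already correct. All resource names and states here are made up.

```python
# Toy model of declarative configuration management: declare the desired
# end state as resources, then converge a node toward it idempotently.
# Resource names, versions, and ports below are hypothetical.

resources = [
    {"type": "package", "name": "apache2", "ensure": "2.4"},
    {"type": "package", "name": "openjdk", "ensure": "8"},
    {"type": "firewall", "port": 443, "ensure": "open"},
]

# Pretend state of one server: apache2 is already at the desired version.
node_state = {("package", "apache2"): "2.4"}

def converge(resources, state):
    """Apply only the resources whose current state differs from desired;
    return the list of resources that actually changed."""
    applied = []
    for r in resources:
        key = (r["type"], r.get("name", r.get("port")))
        if state.get(key) != r["ensure"]:
            state[key] = r["ensure"]   # a real tool installs/configures here
            applied.append(key)
    return applied

changed = converge(resources, node_state)
print(changed)  # apache2 is skipped; a second run changes nothing
```

The payoff is the same one the article describes: the configuration lives in version-controlled text that both developers and operations can read, review, and rerun, instead of in one engineer's memory.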
Let's keep the big picture in mind, though. Although these tools are important, they're a means to an end -- an IT architecture that puts apps in end users' hands -- and it's the end architecture that must be defined, discussed, iterated on, designed, and then implemented within the tool. As one respondent to our survey said: "The key point to remember is DevOps is much more about culture than it is around tooling. ... Both DevOps and Agile borrow key concepts from lean manufacturing, so it's all about communication and openness."
The DevOps methodology forces the various IT teams into a more mature process, one that places less trust in individual team members to drive results and more trust in repeatable, reliable processes for tasks such as virtual machine configuration.
Back to our example of server configuration: Under DevOps, IT wouldn't rely on a single engineer who "knows how it was set up before." Instead, IT uses a configuration that's documented, easily repeatable by developers running the tools in a lab, and, most important, validated by both development and IT ops. If the DevOps process works, there should be less finger-pointing, less reliance on tribal knowledge of how all the knobs and buttons must be set in the infrastructure, and more focus by team members on the end goal. That end goal is the IT architecture, and ultimately the experience that employees or customers get from the apps.