IT teams often coast on "feel-good" technology diagnostics that don't affect the bottom line.
Being on the vendor side of our industry, I've worked with a variety of sales folks -- some good, some bad, and some just plain weird.
A career in sales can be financially rewarding, but these people live a tough life, too. They must hit targets, build pipelines, and survive dreaded sales forecast meetings.
I remember one forecasting session where a colleague in sales was being questioned about an opportunity that had stalled. He explained that although he hadn't yet made the sale, he had a great relationship with the customer. I’ll never forget his manager's response: "You can only ever claim to have a good relationship when the client buys from you – at the moment, you only have an acquaintance."
That may sound harsh, but it's valuable advice – especially when you think about how we measure the effectiveness of DevOps. DevOps often starts out on a poor footing because teams have traditionally gauged success based on "feel-good" technology diagnostics, and they're allowed to get away with it (unlike in sales).
Many of the feel-good things we measure in IT are like the questions a novice salesperson asks in customer meetings.
For example, reporting system availability through a series of green lights (CPU, memory, and network uptime) is like a tech salesperson asking a customer how many servers it has in its datacenter and stopping there. In both cases, the answer isn't necessarily useful. If availability is 99%, we can take technical victory laps. But what's the bottom line to the business? Similarly, if a customer answers that it has 42 servers, then apart from answering the meaning of life (forgive me, Hitchhiker's Guide fans), what has the salesperson really learned about the client's business challenges?
There are other examples where tech diagnostics provide limited value. Reporting page views for a web system indicates online activity, but it affords no insight into whether the traffic generated revenues or profit. Similarly, reporting the number of lines of code produced by each developer isn't the best indicator of productivity, because there are too many other variables (type of code, complexity of application, scope of project, etc.).
Experienced salespeople use situational questions for context but move quickly to questions that help identify real customer problems. In IT, we can adopt a similar approach by aggregating many situational diagnostics for more complete insight into application performance. The overall health of a customer mobile app, for instance, should factor in app performance, back-end system availability, end-to-end transaction trip times, and network latency.
With that information, we can prioritize problem triage and make anticipatory improvements. We've also established a better link between the technology and the business. Instead of making technology acquaintances with more green-light diagnostics, we're building relationships from actionable metrics.
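As a rough illustration of that aggregation idea, the sketch below rolls several normalized diagnostics into one weighted health score. The metric names, readings, and weights are all hypothetical, not taken from any particular monitoring product; the point is that the composite, not any single green light, is what you triage against.

```python
# Sketch of a composite health score for a customer mobile app.
# All metric names, values, and weights below are illustrative assumptions.

def composite_health(metrics, weights):
    """Weighted average of normalized metric scores (each in 0.0-1.0)."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

# Hypothetical normalized readings: 1.0 = ideal, 0.0 = failing.
metrics = {
    "app_performance": 0.95,       # client-side responsiveness
    "backend_availability": 0.99,  # back-end system uptime
    "transaction_time": 0.80,      # end-to-end trip time vs. target
    "network_latency": 0.70,       # latency vs. target
}

# Weights reflect business impact, not just technical severity.
weights = {
    "app_performance": 3,
    "backend_availability": 2,
    "transaction_time": 3,
    "network_latency": 2,
}

score = composite_health(metrics, weights)
print(round(score, 3))  # -> 0.863
```

Note that every individual light here is "green-ish", yet the weighted score surfaces that transaction times and latency are dragging the overall experience down, which is where triage should start.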
Top sales folks don't stop with problem questions; they help customers understand the implications if problems remain unchecked. With DevOps, we should have the same goal, by gaining service-level visibility and by using advanced metrics that expose persistent technical and organizational issues that work against the business. For example, if it's understood that x-number of releases will be needed each week to support a new business process, no amount of development and QA smarts will help if operations have been traditionally rewarded for delaying application releases (which I've seen myself).
The Holy Grail in sales occurs when customers agree there's a big payoff from acquiring your product: more customers, revenue, and profits. Similarly, DevOps efforts should continuously measure and demonstrate the business value of the things we make and operate, and report results the business really cares about, such as speeding time-to-value, increasing customer capture, and preventing churn. But even at this late stage, technology feel-good diagnostics -- like celebrating increased mobile app downloads without looking at sales conversion rates -- can creep back in and derail your efforts.
Worse still, while we're all doing high-fives, existing customers might hate the new release, give it a one-star rating, and end their "relationship" with your business.
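To make the downloads-versus-conversions trap concrete, here is a minimal sketch with made-up numbers showing how the feel-good diagnostic and the business metric can move in opposite directions:

```python
# Sketch: downloads can surge while the business result declines.
# The figures below are hypothetical, for illustration only.

def conversion_rate(conversions, downloads):
    """Fraction of downloads that turned into a sale."""
    return conversions / downloads if downloads else 0.0

last_month = conversion_rate(conversions=500, downloads=10_000)
this_month = conversion_rate(conversions=550, downloads=25_000)

# Downloads more than doubled (the feel-good diagnostic), but the
# conversion rate -- the number the business cares about -- fell.
print(f"{last_month:.1%} -> {this_month:.1%}")  # prints "5.0% -> 2.2%"
```

Reporting only the download count would have shown a win; pairing it with the conversion rate reveals the release may actually be hurting the business.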
The right DevOps metrics drive better business results, much the way smart questions progress a sale. And, as in sales, expectations are always increasing, so your DevOps team needs to constantly improve what they measure and how.
Peter Waterhouse is a senior technical marketing advisor for CA Technologies' strategic alliance, service providers, cloud, and industry solutions businesses.