Assessing AI's Impact on Developers and the Bottom Line
There’s a growing need to develop systems to measure productivity, especially with the introduction of AI in software development.
Measuring developer productivity has long been a challenge, one that predates AI-powered developer tools. A recent study found that most C-level executives believe measuring developer productivity can help their business, yet 51% reported that their current methods for measuring it are flawed, and 45% said they don’t measure developer productivity against business outcomes at all. These findings point to a growing need for systems that measure productivity, especially as AI enters software development.
AI-powered coding tools allow developers to focus on more strategic work by freeing them from the drudgery of repetitive tasks. However, many organizations struggle to accurately gauge the impact of AI-enabled software development. To measure the effects of AI successfully, DevSecOps teams must reevaluate the traditional metrics they’ve used to gauge developer output and ensure that their investments are driving business outcomes.
Legacy Metrics Are Inadequate
Reporting on the productivity gains of AI demands a nuanced approach that goes beyond lines of code produced, the number of code commits, or tasks completed. It requires a shift to evaluating real-world business outcomes that balance development speed, software quality, and security. Last year, McKinsey & Company described developer productivity measurement as a “black box,” noting that in software development, “the link between inputs and outputs is considerably less clear” than in other functions.
Although using AI to produce more code faster can be beneficial, it can also lead to technical debt if the resulting code isn’t high quality and secure. AI-generated code often requires more time to review, test, and maintain. Developers may save time using AI to write code, but that time can simply shift to later stages of the software development lifecycle. Additionally, any security flaws in AI-generated code will demand engagement from security teams and extra time to mitigate potential security incidents.
When assessing the value AI brings to software development, it’s essential to consider that AI should be implemented and evaluated as a supplement to human developers, not a replacement.
Measuring Quality, Not Quantity
Instead of focusing on acceptance rates or lines of code generated, organizations should aim for a more holistic view of AI’s impact on productivity and their bottom line. This approach ensures that the actual benefits of AI-aided software development are fully realized and appreciated.
The best approach involves merging quantitative data from throughout the software development lifecycle (SDLC) with qualitative insights from developers regarding the real impact of AI on their daily work and its influence on long-term development strategies. For example, developers spend about 75% of their time on tasks other than code generation, which means that a more productive use of AI could enable developers to spend less time reviewing, testing, and maintaining code.
Additionally, teams should consider utilizing value stream analytics to evaluate the complete workflow from concept to production. Value stream analytics does not rely on a solitary metric; it continuously monitors metrics such as lead time, cycle time, deployment frequency, and production defects. This approach maintains a focus on business results rather than developer actions.
One recommended measurement technique is the DORA framework, which evaluates a development team’s performance over a specific timeframe. DORA metrics (deployment frequency, lead time for changes, mean time to restore, change failure rate, and reliability) provide visibility into a team’s agility, operational efficiency, and velocity, serving as a proxy for how well an engineering organization balances speed, quality, and security.
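To make the DORA metrics above concrete, here is a minimal sketch of how a team might compute four of them from deployment records. The record format and sample values are hypothetical, invented for illustration; real pipelines would pull this data from CI/CD and incident-tracking systems.

```python
from datetime import datetime

# Hypothetical deployment records:
# (commit_time, deploy_time, caused_failure, restore_time)
# restore_time is None when the deployment caused no production failure.
deployments = [
    (datetime(2024, 5, 1, 9),  datetime(2024, 5, 1, 15), False, None),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True,  datetime(2024, 5, 3, 14)),
    (datetime(2024, 5, 6, 8),  datetime(2024, 5, 6, 12), False, None),
    (datetime(2024, 5, 9, 13), datetime(2024, 5, 10, 9), False, None),
]

window_days = 10  # reporting window covered by the records above

# Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / window_days

# Lead time for changes: mean hours from commit to deploy.
lead_times = [(d - c).total_seconds() / 3600 for c, d, _, _ in deployments]
mean_lead_time = sum(lead_times) / len(lead_times)

# Change failure rate: share of deployments causing a production failure.
failures = [rec for rec in deployments if rec[2]]
change_failure_rate = len(failures) / len(deployments)

# Mean time to restore: mean hours from failed deploy to restoration.
restore_hours = [(r - d).total_seconds() / 3600 for _, d, _, r in failures]
mttr = sum(restore_hours) / len(restore_hours) if restore_hours else 0.0

print(f"Deploys/day: {deploy_frequency:.1f}")          # 0.4
print(f"Lead time (h): {mean_lead_time:.2f}")          # 13.75
print(f"Change failure rate: {change_failure_rate:.0%}")  # 25%
print(f"MTTR (h): {mttr:.1f}")                         # 3.0
```

Tracking these four numbers over successive windows, rather than any single snapshot, is what reveals whether an AI rollout is actually improving throughput without degrading stability.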
Implementing AI Sustainably
AI is still a new technology, and organizations should anticipate typical growing pains during the transition, recognizing that development and security teams may not yet fully trust AI-generated output. Introducing new AI tools into an existing workflow can also require process changes, such as updated code review, testing, and documentation practices.
To begin, teams should build best practices in a lower-risk segment before expanding their AI applications, ensuring they scale safely and sustainably. For example, AI code generation works well for scaffolding, test generation, syntax corrections, and documentation. Starting small lets teams build momentum and motivation as they see better results and learn to use the tools more effectively. Productivity may initially decline as teams adjust to new workflows, so organizations should give their teams a grace period to determine how AI best fits their processes.
AI coding tools have the potential to transform developers’ daily workflows, but the business outcomes they deliver will determine whether those investments prove sustainable over the long term.
By embracing a holistic approach to productivity measurement that analyzes real-world outcomes, teams can prove AI's value to DevSecOps and, more broadly, the organization.