The software delivery cadence has continued to accelerate with the rise of Agile, DevOps and continuous processes including Continuous Delivery and Continuous Deployment. The race is on to deliver software ever faster using fewer resources. Meanwhile, for competitive reasons, organizations don't want to sacrifice quality in theory, but they sometimes do in practice.
Recognizing the need for speed and quality, more types of testing have continued to "shift left." Traditionally, developers have been responsible for unit testing to ensure the software meets functional expectations, but today, more of them are testing for other things, including performance and security. The benefit of the shift-left movement is the ability to catch software flaws and vulnerabilities earlier in the lifecycle, when they're faster, easier and cheaper to fix. That's not to say that more exhaustive testing shouldn't be done later; shift-left testing just ensures that fewer defects and vulnerabilities make their way downstream.
Enter AI. More developers are including artificial intelligence in their applications, and they're also using more AI-powered tools to do their work. Granted, not all forms of AI are equally complex or intelligent; however, the level of intelligence embedded in products continues to increase. The danger is that developers and software development tool vendors are racing to implement AI without necessarily understanding what it is they're implementing or the associated risks.
"In my first foray into applied AI, we had to consider the implications of interfacing to triply-redundant flight control systems and weapons that kill," said Gregg Gunsch, a retired US Air Force lieutenant colonel and retired college professor with over 20 years of experience teaching and leading research in applied artificial intelligence and machine learning, information security for computer science/engineering majors and digital forensic science. "That tended to instill a strong 'seriously test before release' attitude."
Not every developer is building software with life-and-death consequences, but many are building applications and firmware that can have material impacts on end users, the enterprise, customers, partners, governments and more. Given that some forms of AI may yield unpredictable results because of the way they're designed or because there are flaws or bias in the data, the question is whether the ongoing quest for ever-faster software delivery is practical, and if it is, whether it's wise.
"I get concerned about putting guardrails in now, or we may miss what's happening and then realize where the bots went wrong," said Scott Likens, new services and emerging tech leader at PwC.
Value drives the need for speed
Part of the continuous process mantra is delivering value quickly to consumers for competitive reasons. However, quick execution and an "innovation at any cost" mentality also produce broken user experiences and functional gaffes that end users would happily trade for better-quality software delivered less frequently.
"I am very tired of being an uninformed beta test subject, but I recognize that crowdsourcing of some kinds [must] happen to collect the data necessary for training the systems and steering development. Rapid-prototyping around the user is a key tool in design engineering," said Gunsch. "Sometimes, there may not be other good ways to collect the massive amounts [of data] needed for learning systems besides just experimenting on the entire user population."
Attitudes about speed-quality tradeoffs differ around the world. According to PwC's Likens, speed trumps quality in China, but the same is not true in the U.S.
"We had the social media wave where consumers wanted that instant change, but now they're almost revolting against how often things change," said Likens.
User attitudes also vary based on the nature of the application itself. For example, consumers expect banking applications to be reliable and secure, but they don't have the same expectations of a social media selfie app.
"You've had data breaches and data leakage and now consumers are willing to accept less to be protected," said Likens. "You can't innovate at all costs for core enterprise [or core consumer] apps."
Will AI help or hinder software delivery speed?
The potential risks of self-learning AI seem to indicate that trust should be included in shift-left practices and in software development processes generally. While trust adds yet another factor to consider earlier in the software development lifecycle (SDLC), embedding it into existing processes would help ensure that it is addressed efficiently rather than as an afterthought. In fact, AI may be part of the solution that ensures trust is not only contemplated but validated and verified.
Already, AI is being used in parts of the SDLC, such as automated software testing tools that use AI to ensure better test coverage and to prioritize what needs to be tested. It's also driving higher levels of efficiency by enabling more tests to be run in shorter timeframes.
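The test-prioritization idea can be illustrated with a toy risk-scoring heuristic. This is a hypothetical sketch, not the algorithm of any particular tool: the weights, field names and test metadata below are invented for illustration, and real AI-assisted tools learn such signals from build and coverage history rather than using hand-set weights.

```python
# Hypothetical sketch of risk-based test prioritization: each test is
# scored from its recent failure rate and its overlap with the files
# changed in a commit, then the suite is reordered so the riskiest
# tests run first. Weights here are arbitrary, for illustration only.

def prioritize(tests, changed_files):
    """Return test names ordered by descending risk score."""
    changed = set(changed_files)

    def score(test):
        # Weight recent failures heavily; add overlap with the change set.
        overlap = len(set(test["covers"]) & changed)
        return 2.0 * test["recent_failure_rate"] + overlap

    return [t["name"] for t in sorted(tests, key=score, reverse=True)]

tests = [
    {"name": "test_login",   "recent_failure_rate": 0.0, "covers": ["auth.py"]},
    {"name": "test_payment", "recent_failure_rate": 0.4, "covers": ["billing.py"]},
    {"name": "test_search",  "recent_failure_rate": 0.1, "covers": ["search.py", "auth.py"]},
]

# A commit touching auth.py and billing.py pushes the flaky payment
# test and the auth-adjacent search test ahead of the stable login test.
print(prioritize(tests, ["auth.py", "billing.py"]))
# → ['test_payment', 'test_search', 'test_login']
```

Running the riskiest tests first means a time-boxed CI run still exercises the code most likely to be broken, which is how such tools squeeze more effective coverage into shorter timeframes.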
According to Likens, machine learning and computer vision can produce effective UI designs because the system can look at far more permutations than a human could and generate code from it.
"Now you're seeing AI is generating code at a level that's human-usable. A lot of stuff we do on the UI we can do at a high quality level because we feed in unbiased training data and hand-drawings, something that machine learning vision can recognize as something that looks good," said Likens.
Not all aspects of software development and delivery have been automated using AI yet, but more will be over time as tools become more sophisticated and software development practices continue to evolve. AI can help accelerate software delivery, but its application will be more valuable if that speed is matched with elements of quality, including security and trust.