Application quality is no longer just about functional adherence to requirements. Applications and production environments have grown increasingly complex over the years, and user expectations are higher than ever. Quality now encompasses many attributes, more than some teams are addressing well, or at all.
Most organizations operate in highly competitive markets, which requires them to place increasing emphasis on application quality. A company with a captive market (say, the only traditional energy company operating in a certain region) may feel little pressure to improve a one-star rating, because where else are customers going to go except solar? That attitude may miss the bigger picture: whether the application is helping the company advance its business objectives.
"Historically, we thought about our walls. We write requirements, we design things, we write the code, we test it, and ship it. Now, everything around me is changing. I'm running in the cloud, I'm calling third-party APIs, there are so many variables and pieces that I live in a very dynamic world," said Thomas Murphy, senior director analyst at Gartner. "I have to check usability, I have to check performance, I have to understand at runtime what's going on. Are there features nobody's using because we missed the mark or they're not very usable? We need to fix that if the function is important to us from a business standpoint."
Looking at quality holistically
Nancy Kastl, executive director of testing services at digital transformation agency SPR, said that while software testing practices focus on correctness, including adherence to functional requirements, accuracy of calculations, correct navigation, and software performance and scalability, other quality attributes can be overlooked.
"Relevant quality attributes should be embedded in the [software requirements], built into the product during architecture design and coding activities and independently verified as satisfied through various methods," said Kastl.
Some quality aspects, in her opinion, include the following:
How to assess where you are and where you need to go
One of the things that holds companies back is testing as usual. That is, adhering to a bygone mindset when the rules of application quality have shifted dramatically. Some consultants can quickly assess the state of application quality practices just by listening to the words people use. For example, is testing an event or is it continuous? Is the application development or IT leadership current or behind in its understanding of what code quality is and how to achieve it? Is security testing just a vulnerability check late in the cycle, or are application vulnerabilities and threat modeling addressed earlier and throughout the lifecycle?
Tooling can also be a barrier to progress. For instance, UI testing tools have improved considerably over the years, but what is the status of infrastructure testing tools? Can DevOps and platform engineers adequately test the infrastructure they're building, or not?
One thing Gartner's Murphy has been focused on is the ability to assess the root cause of quality problems.
"I want to understand when do I create bugs? When do I find them? When do I fix them?" said Murphy. "If I notice we create a lot of a certain type of bug, I want to quit creating them in the first place, so is it an education thing? A style or a practice? Those things help me understand where to focus."
Honest assessments of maturity can be difficult to achieve when it's not exactly clear what a maturity model might look like, let alone where the company falls on the continuum. Consultants can help here by providing objective assessments of where a team or organization stands relative to other organizations in the same or a similar industry, translating business goals into quality imperatives, and helping construct a road map for improving application quality over time.
Software quality has become both a brand issue and a strategic issue, and the topic continues to grow more complex. Fundamentally, organizations need to embrace an ethos of continuous quality that spans the SDLC and reflects modern application architectures and the ecosystems in which they run.
Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.