6/24/2010

Get Software Quality Right

We've known the basics for years but need to apply them.

Software quality suffers as application size increases. We've known that since the 1970s but still haven't solved the problem. The National Institute of Standards and Technology found in a 2002 study that more than 60% of manufacturing companies reported major defects in software they bought, with just under 80% reporting minor defects.

Part of the problem involves how and what you measure. A common approach is the lines-of-code metric (usually expressed as KLOC, thousands of lines of code), but it ignores important stages of the software development life cycle such as requirements and design. IBM dealt with this shortcoming back in the '70s by developing two metrics to gauge software quality: defect potentials and defect removal efficiency. Both remain highly relevant today, and no other measures have had a greater impact on software quality, costs, and schedules.

Defect potentials are the probable numbers of defects that will be found in various stages of development, including requirements, design, coding, documentation, and "bad fixes" (new bugs introduced when repairing older ones). Defect removal efficiency is the percentage of the defect potentials that will be removed before an application is delivered to users.
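
As a concrete illustration, here's a minimal sketch in Python of how defect removal efficiency is typically computed; the 900 and 100 defect counts are hypothetical, not figures from any project cited here.

# Defect removal efficiency (DRE): the share of all defects found
# that were removed before the software reached users.
def defect_removal_efficiency(found_before_release, found_after_release):
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total if total else 0.0

# Hypothetical example: 900 defects removed during development,
# 100 more reported by users after delivery.
print(defect_removal_efficiency(900, 100))  # 90.0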

Defect potentials can be measured with function points, units of measurement that express the amount of business functionality an information system provides to users. Function points are a better yardstick than lines of code here because most serious defects aren't found in the code; they originate in the application's requirements and design.

The range of defect potentials typically scales from just less than two per function point to about 10. Defect potential correlates to application size: As size increases, defect potential rises. It also varies with the type of software, CMMI levels, development methodology, and other factors.

Comparing the quality from different software methodologies is complicated. However, if the applications being compared are of similar size and if they use the same programming languages, then it's possible to compare quality, productivity, schedules, and other areas. The table below compares the defect potentials and removal efficiencies of several software development methodologies. Putting aside the poor results associated with CMMI level 1 groups, all of them perform well.

If you have a scientific calculator handy, take the size of the application in function points and raise it to the 1.25 power. The result is the approximate number of defects that will occur. Try this for 10, 100, 1,000, and 10,000 function points, and you'll see that as applications get bigger, the number of potential defects grows disproportionately. (While you have your calculator out, raise the size to the 1.2 power to see how many test cases you'll need. Raise it to the 0.4 power to get the number of calendar months the project will take. Divide size by 150, and you get how many people are needed on the development team.)
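
These rules of thumb are easy to script. Here's a minimal sketch in Python, assuming size in function points is the only input; the exponents and the divisor of 150 are the ones quoted above, and real estimates also depend on methodology, programming language, and team.

# Rough rules of thumb: project estimates from size in function points.
def rough_estimates(function_points):
    return {
        "defect_potential": function_points ** 1.25,  # probable defects
        "test_cases": function_points ** 1.2,         # test cases needed
        "schedule_months": function_points ** 0.4,    # calendar months
        "staff": function_points / 150,               # development team size
    }

for size in (10, 100, 1_000, 10_000):
    est = rough_estimates(size)
    print(f"{size:>6} FP: {est['defect_potential']:>8.0f} defects, "
          f"{est['test_cases']:>8.0f} test cases, "
          f"{est['schedule_months']:>5.1f} months, {est['staff']:>5.1f} staff")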

Defect Removal Efficiency In Depth

The U.S. average for defect removal efficiency is only 85%, based on my research on about 13,000 development projects. The primary cause of projects running late or over budget is an excessive number of defects that aren't discovered or removed until testing starts. Such projects appear to be on schedule and within budget--until testing begins. Then hundreds or even thousands of latent defects are discovered, causing delays and cost overruns, and the test schedule ends up far exceeding the original plan.

Most forms of testing average only about 35% to 50% defect removal efficiency. As application size increases, test coverage and testing's defect removal efficiency drop, which suggests that additional quality control methods such as inspections are needed.

Formal inspections of requirements, design, and source code have been in use since IBM began looking for better quality control methods in the '70s. With inspections, defect removal efficiency levels spike higher than 85% and testing defect removal efficiency goes up by about 5% per test stage. More recently, static analysis tools used prior to testing were found to contribute to high levels of defect removal efficiency (in the 85% range), although not against dynamic problems such as performance. (See table above for defect removal efficiency levels for various stages.)

While defect potentials and defect removal efficiency are the most effective ways of evaluating software quality controls, actually improving software quality requires two process improvements: defect prevention and defect removal.

Defect prevention refers to technologies and methodologies that lower defect potentials or reduce the numbers of bugs that must be eliminated. Examples of defect prevention methods include joint application design, quality function deployment, Six Sigma, structured design, and participation in formal inspections. For its part, defect removal refers to methods that can either raise the efficiency levels of specific forms of testing or raise the overall cumulative removal efficiency by adding other reviews or tests. The two approaches can be implemented at the same time.

To achieve a cumulative defect removal efficiency of 95%, it's necessary to apply at least nine defect removal activities in sequence:

1. Design inspections

2. Code inspections

3. Automated static analysis

4. Unit test

5. New function test

6. Regression test

7. Performance test

8. System test

9. External beta test

Requirements inspections, test case inspections, and specialized forms of testing (such as human factors, performance, and security testing) add to defect removal efficiency levels.
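
To see why it takes such a long sequence to reach 95%, note that each stage removes only a fraction of the defects still latent, so the efficiencies compound. The sketch below walks through the nine activities listed above; the per-stage percentages are assumptions chosen for illustration, not figures from the author's data.

# Cumulative defect removal across a sequence of stages. Each stage
# catches some fraction of the defects that survived the earlier stages.
# Per-stage efficiencies below are illustrative assumptions only.
stages = [
    ("Design inspections", 0.55),
    ("Code inspections", 0.60),
    ("Automated static analysis", 0.55),
    ("Unit test", 0.30),
    ("New function test", 0.30),
    ("Regression test", 0.25),
    ("Performance test", 0.10),
    ("System test", 0.35),
    ("External beta test", 0.25),
]

remaining = 1.0  # fraction of the original defect potential still latent
for name, efficiency in stages:
    remaining *= 1.0 - efficiency
    print(f"after {name:<26} cumulative DRE = {1.0 - remaining:.1%}")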

Who's Using These Metrics?

Since defect potentials and defect removal efficiency are among the easiest to use and most effective metrics for improving software quality, you have to wonder why everyone isn't using them. My research shows that the companies most likely to use them are those that build computers and other complex hardware: telecommunications, aerospace, embedded-systems, and defense contractors. Many of them are topping 95% in defect removal efficiency, compared with an industry average of 82%.

Dovel Technologies, a software developer and system integrator that builds IT systems for government and private industry, reported in 2009 a 96% defect removal efficiency, which it credits to the adoption of defect potentials and defect removal efficiency metrics, along with close monitoring throughout the development life cycle using formal and informal reviews, among other approaches.

Companies that have adopted these metrics have cut their development and maintenance costs as well. Reworking defective requirements, design, and code can consume as much as 50% of the total cost of software development.

What It All Means

Combining inspections, static analysis, and testing is cheaper than testing by itself and leads to much better defect removal efficiency levels. In concert, these approaches also shorten development schedules by more than 45% because, when testing starts after inspections, almost 85% of the defects already will have been addressed.

To measure defect potentials, it's necessary to keep good records of all defects found during development. When IBM applied formal inspections to a large database project, delivered defects were reduced by more than 50% compared with previous releases, according to my research, and the schedule was shortened by about 15%. Testing was reduced from three shifts over 60 days to one shift over 40 days. Most important, customer satisfaction improved to "good," compared with the "very poor" ratings customers gave prior releases.

Cumulative defect removal efficiency was raised from about 80% to just over 95% as a result of using formal design and code inspections, and maintenance costs came down by more than 45% for the first year of deployment. Those are the kind of results that speak for themselves.

1. Defect potentials: Make sure they stay below three per function point

2. Defect removal efficiency: Keep it above 95%

3. Quality metrics: Apply them from requirements through maintenance


Capers Jones is the founder and former chairman of Software Productivity Research, a software development consultancy. Write to us at iweekletters@techweb.com
