Get Software Quality Right

We've known the basics for years but need to apply them.

InformationWeek Staff, Contributor

June 25, 2010


Software quality suffers as application size increases. We've known that since the 1970s but still haven't solved the problem. The National Institute of Standards and Technology found in a 2002 study that more than 60% of manufacturing companies reported major defects in software they bought, with just under 80% reporting minor defects.

Part of the problem involves how and what you measure. A common approach is the lines-of-code metric (referred to as KLOC, for thousands of lines of code), but it ignores important stages of the software development life cycle such as requirements and design. IBM dealt with this shortcoming back in the '70s by developing two metrics to gauge software quality: defect potentials and defect removal efficiency. Both are still highly relevant today, and they have had a greater impact on software quality, costs, and schedules than any other measures.

Defect potentials are the probable numbers of defects that will be found in various stages of development, including requirements, design, coding, documentation, and "bad fixes" (new bugs introduced when repairing older ones). Defect removal efficiency is the percentage of the defect potentials that will be removed before an application is delivered to users.
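To see how the two metrics relate, here's a minimal back-of-the-envelope sketch in Python. The defect counts are invented for the illustration, not drawn from any measured project:

    # Defect potential: the probable total defects across requirements,
    # design, coding, documentation, and bad fixes (illustrative number).
    defect_potential = 500

    # Defects found and removed before the application ships (illustrative).
    defects_removed = 425

    # Defect removal efficiency: the share of the defect potential
    # eliminated before delivery to users.
    dre = defects_removed / defect_potential
    print(f"Defect removal efficiency: {dre:.0%}")  # prints 85%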

Defect potentials can be measured with function points, units of measurement that express the amount of business functionality an information system provides to users. Function points are a better yardstick than lines of code because most serious defects aren't found in the code itself; they originate in the application's requirements and design.

Defect potentials typically range from just under two defects per function point to about 10. Defect potential correlates with application size: As size increases, defect potential rises. It also varies with the type of software, CMMI level, development methodology, and other factors.

Comparing quality across different software methodologies is complicated. However, if the applications being compared are of similar size and use the same programming languages, then it's possible to compare quality, productivity, schedules, and other areas. The table below compares the defect potentials and removal efficiencies of several software development methodologies. Putting aside the poor results associated with CMMI level 1 groups, all of them perform well.

If you have a scientific calculator handy, take the size of the application in function points and raise it to the 1.25 power. The result is the approximate number of defects that will occur. Try this for 10, 100, 1,000, and 10,000 function points, and you'll see that as applications get bigger, the number of potential defects grows dramatically faster than size. (While you have your calculator out, raise the size to the 1.2 power to see how many test cases you'll need. Raise size to the 0.4 power, and you get the number of months the project will take. Divide size by 150, and you get the approximate number of people needed on the development team.)
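These rules of thumb are easy to script. The exponents and the divisor in this short Python loop come straight from the paragraph above; everything else is just formatting:

    for fp in (10, 100, 1000, 10000):
        defects = fp ** 1.25   # approximate defect potential
        tests = fp ** 1.2      # approximate number of test cases
        months = fp ** 0.4     # approximate schedule in calendar months
        staff = fp / 150       # approximate development team size
        print(f"{fp:>6} FP: {defects:>9,.0f} defects  "
              f"{tests:>9,.0f} test cases  {months:5.1f} months  {staff:5.1f} staff")

At 10 function points the formula predicts about 18 defects; at 10,000 it predicts roughly 100,000. That is the scaling problem in a nutshell.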

Defect Removal Efficiency In Depth

The U.S. average for defect removal efficiency is only 85%, based on my research on about 13,000 development projects. The primary cause of projects running late or over budget is an excessive number of defects that aren't discovered or removed until testing starts. Such projects appear to be on schedule and within budget--until testing begins. Then hundreds or even thousands of latent defects are discovered, causing delays and cost overruns, and the test schedule ends up far exceeding the original plan.

Most forms of software testing average only about 35% to 50% defect removal efficiency. As application size increases, test coverage and test removal efficiency drop. This suggests that additional quality control methods, such as inspections, are needed.

Formal inspections of requirements, design, and source code have been in use since IBM began looking for better quality control methods in the '70s. Inspections routinely achieve defect removal efficiency levels higher than 85% on their own, and they raise the efficiency of each subsequent test stage by about 5%. More recently, static analysis tools used prior to testing have been found to deliver high defect removal efficiency (in the 85% range), although not against dynamic problems such as performance. (See table above for defect removal efficiency levels for various stages.)

While defect potentials and defect removal efficiency are the most effective ways of evaluating software quality controls, actually improving software quality requires two process improvements: defect prevention and defect removal.

Defect prevention refers to technologies and methodologies that lower defect potentials or reduce the numbers of bugs that must be eliminated. Examples of defect prevention methods include joint application design, quality function deployment, Six Sigma, structured design, and participation in formal inspections. For its part, defect removal refers to methods that can either raise the efficiency levels of specific forms of testing or raise the overall cumulative removal efficiency by adding other reviews or tests. The two approaches can be implemented at the same time.

To achieve a cumulative defect removal efficiency of 95%, it's necessary to apply at least nine defect removal activities in sequence.
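The arithmetic behind that requirement is compounding: the cumulative efficiency is 1 minus the product of what each stage misses. Here's a Python sketch with hypothetical per-stage efficiencies; the nine values are illustrative assumptions, not the article's measured figures:

    # Hypothetical removal efficiencies for nine sequential activities,
    # e.g., pretest inspections and static analysis followed by test
    # stages. The specific values are illustrative only.
    stages = [0.65, 0.60, 0.55, 0.55, 0.35, 0.35, 0.30, 0.30, 0.30]

    escaped = 1.0
    for efficiency in stages:
        escaped *= 1.0 - efficiency  # fraction slipping past this stage

    print(f"Cumulative defect removal efficiency: {1.0 - escaped:.1%}")

Note how the stages compound: no single stage in this example reaches 85%, yet the full sequence clears 95% comfortably, while dropping the four pretest stages from the same calculation yields only about 86%.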
