News
5/27/2005 11:30 AM

Tools Help Squash Bugs

Security features in software-testing products can highlight vulnerable areas of already-developed code

Even as the front end of application development becomes more automated, a challenge remains on the back end of the process, where the code undergoes testing.

Testing is still often a manual process, one that's done only after the coding is completed and if there's enough time for it. Yet the expense of projects is reduced when bugs and errors are caught early. "Everyone recognizes that testing as early in the development life cycle as possible results in savings," even if they don't do it, says Paul Zorfass, an IDC software-development analyst.

A big area of concern for application project managers is security, and several specialized products have come onto the market to examine code for security holes. Agitar Software's Agitator, Fortify Software's Application Risk Analyzer, LogicLibrary's LogicScan, and Parasoft's JTest and C++Test all have new security features that can highlight vulnerable areas of already-developed code.

At Financial Engines Inc., an administrator of corporate 401(k) plans, it's essential that the company bring new services online as fast as possible to give its customers' employees choices in their retirement plans. What's also essential is that those applications contain no back doors or other exposures that might admit hackers, says Garry Hallee, executive VP of technology. "Our reputation as a 401(k) adviser would be greatly diminished if people thought we were unable to keep our customer data secure," he says.

At the end of each day's coding, the development team creates a new build--or composite assembly of source code--of the project, even though it remains a work in progress. Then Fortify Software's Application Risk Analyzer is run against it. The scan detects problems as they occur, rather than finding them in a security review at the end of the project--or worse, in an outside security audit a year later, Hallee says.
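
In rough terms, the hook Financial Engines runs is a build step that invokes a scanner against the day's build and fails the build on findings. A minimal sketch, assuming a hypothetical command-line scanner (the scan-tool command and its flags below are stand-ins, not Fortify's actual interface):

```java
import java.io.IOException;

// Minimal sketch of a nightly-build hook that runs a source scanner against
// the day's build. "scan-tool" and its flags are hypothetical stand-ins,
// not Fortify's actual command line.
public class NightlyScan {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "scan-tool", "--source", "build/latest", "--report", "reports/scan.html");
        pb.inheritIO();                        // stream scanner output into the build log
        int exit = pb.start().waitFor();
        if (exit != 0) {
            // Surface findings immediately rather than in an end-of-project review
            System.err.println("Scan flagged issues; see reports/scan.html");
            System.exit(exit);
        }
    }
}
```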

Financial Engines' applications amount to 2 million lines of source code. No matter how hard the human eye tries to close all exposures, it's not as good as an automated tool, Hallee says. "We've done a lot to educate the team, but they can't do as comprehensive an analysis" as an automated tool, he says. "We find problems a lot earlier." And finding problems earlier is the goal. "It's our job to safeguard people's data. That's our whole business. We can't afford to have a security vulnerability," he says.

Jayson Minard, CIO of Abebooks Inc., a $130 million-a-year online used-book seller and supplier to Amazon.com Inc., found a sizable code problem in a project that was thought to be close to completion. When the application was run through Agitar Software's Agitator, an exception appeared that said one of the rules behind the app's currency-conversion engine was being violated. That rule said that a value in one country's currency, such as the British pound, could not be equal to the converted value in Canadian or American dollars or any other currency, but Agitator was showing instances where the software was yielding such a result.

If the code had gone into production, the mistaken conversions would have cost Abebooks, which deals with booksellers internationally, an estimated $200,000 in the software's first month of operation, Minard says.

The problem was hidden in the bowels of Abebooks' infrastructure. In building its customer database, Abebooks had tapped a database from an acquired company that lacked country-of-origin data for customers. Without a country of origin, the currency-conversion engine was listing the initial value as both the original and converted value. Abebooks' own customer database always presented a country of origin.

That find, made while Abebooks was still evaluating Agitator, justified the product's expense, and it was purchased and implemented. An entry-level, 10-seat deployment of Agitator 2.0 is priced at $4,000 per seat. The tool reviews software by generating every test case it can conceive of and running it against the code, initiating tests that developers often don't think of. No one thought to test what would happen in the currency engine if no country of origin were presented, Minard says, because the development team assumed its software would never encounter such a case.
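
A simplified reconstruction of that failure, with hypothetical names and rates rather than Abebooks' actual code: a record with no country of origin falls through the rate lookup, so the input amount comes back unchanged--exactly the equality the business rule forbids.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Hypothetical reconstruction of the bug Agitator surfaced; rates and
// class names are illustrative, not Abebooks' actual code.
public class CurrencyConverter {
    private static final Map<String, BigDecimal> RATE_TO_USD = new HashMap<>();
    static {
        RATE_TO_USD.put("GB", new BigDecimal("1.80"));   // illustrative rates
        RATE_TO_USD.put("CA", new BigDecimal("0.80"));
    }

    public BigDecimal toUsd(BigDecimal amount, String countryOfOrigin) {
        BigDecimal rate = RATE_TO_USD.get(countryOfOrigin);
        if (rate == null) {
            return amount;   // BUG: missing country yields converted == original
        }
        return amount.multiply(rate);
    }

    public static void main(String[] args) {
        BigDecimal amount = new BigDecimal("100.00");
        BigDecimal converted = new CurrencyConverter().toUsd(amount, null);
        // The business rule: a converted value may never equal the original
        // when the currencies differ. A missing country of origin violates it.
        if (converted.compareTo(amount) == 0) {
            System.err.println("Invariant violated: converted == original");
        }
    }
}
```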

Some organizations don't need automated testing so much for their own development teams as for code being brought in from the outside. The 406-bed Children's Medical Center Dallas is one of the largest pediatric hospitals in the country. Its core system is Cerner Corp.'s Millennium hospital information system, with many other applications working alongside it. There are patient-registration, billing, clinical, and pharmaceutical systems, all with critical dependencies between them and the core system. At Children's Medical Center, for example, the pharmaceutical system must recognize that the patient is a child and not prescribe an adult dosage, says Alan Allred, group manager of information services.
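
That dependency boils down to a cross-system check. A minimal sketch, with hypothetical names and a hypothetical age threshold rather than Cerner Millennium's actual interfaces:

```java
// Sketch of a cross-system dependency check: before the pharmacy system
// accepts an order, it consults the patient's age from registration.
// Names and the age threshold are hypothetical.
public class DosageCheck {
    static final int PEDIATRIC_AGE_LIMIT_YEARS = 18;

    static boolean isAllowed(int patientAgeYears, double doseMg, double adultDoseMg) {
        // Reject an adult-strength dose for a pediatric patient
        return !(patientAgeYears < PEDIATRIC_AGE_LIMIT_YEARS && doseMg >= adultDoseMg);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed(7, 500.0, 500.0));  // false: child, adult dose
        System.out.println(isAllowed(7, 120.0, 500.0));  // true: reduced pediatric dose
    }
}
```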

Out of an IT staff of 140, 80 members are analysts who are well acquainted with the hospital's policies and procedures and who frequently consult with 105 users. For each new application, updated app, or software patch, these 185 analysts and users review what it's supposed to do and write a test plan for how it needs to work. Software testers then convert the plan into tests for the code and run them.

Until the fall of 2003, this was largely a manual process based on scripts captured in Excel spreadsheets and printed out for the testers. When the tests came back, they carried the testers' handwritten notes interpreting how the software had fared, Allred recounts. Analysts then reviewed the results, recommended changes to the software suppliers, and drew up another round of testing to see whether the refurbished code could pass muster in a follow-up review. A massive paper archive was established to hold the test results for each new piece of code added to the hospital's systems.

Then, 21 months ago, the hospital decided to centralize both the test-script creation and test results in a repository called Test Director, supplied by Mercury Interactive Corp. "We can track the total life cycle of a defect until we know it's fixed," Allred says. "We have an audit trail of dates, comments, who did what, and when it was fixed."
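
Conceptually, that audit trail is a defect record whose every state change logs date, actor, and comment. A sketch, assuming a hypothetical Defect class rather than Test Director's actual data model:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of the audit trail Allred describes: each state change on a
// defect records when it happened, who acted, and why. The Defect class
// is hypothetical, not Test Director's actual data model.
public class Defect {
    enum Status { OPEN, IN_REVIEW, FIXED, VERIFIED }

    record AuditEntry(Instant when, String who, Status from, Status to, String comment) {}

    private Status status = Status.OPEN;
    private final List<AuditEntry> trail = new ArrayList<>();

    public void transition(String who, Status to, String comment) {
        trail.add(new AuditEntry(Instant.now(), who, status, to, comment));
        status = to;
    }

    public List<AuditEntry> auditTrail() {
        return List.copyOf(trail);   // full life cycle, reviewable end to end
    }
}
```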

Any change in software having to do with a patient coming into the hospital for cardiology tests, for example, threatens to disrupt the 1,124 software steps that must occur to admit that patient and prepare him or her for testing. Those steps are scripted and tested, with the results visible in Test Director rather than scattered across hundreds of Excel spreadsheets. The result is a more structured and focused testing process, system analyst Don Ingerson says.
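
In code terms, such a script is an ordered checklist whose results land in one place. A minimal sketch with hypothetical step names (the real admission script covers 1,124 steps):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal sketch of scripted-step testing: named steps run in order and
// pass/fail results are recorded centrally rather than across hundreds
// of spreadsheets. Step names and checks are hypothetical.
public class AdmissionScript {
    public static void main(String[] args) {
        Map<String, Supplier<Boolean>> steps = new LinkedHashMap<>();
        steps.put("register patient",        () -> true);
        steps.put("verify insurance",        () -> true);
        steps.put("schedule cardiology lab", () -> false);  // simulated failure
        // ...the real admission script runs 1,124 such steps

        steps.forEach((name, check) ->
                System.out.printf("%-26s %s%n", name, check.get() ? "PASS" : "FAIL"));
    }
}
```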

"Before, we didn't know for sure when we had closure" on a new piece of software, Allred says. "The confidence with which we deliver a thoroughly tested product has risen tenfold."

Continue to the sidebar:
Cold Code: Chain Tests Outside Apps
