Who's To Blame For Insecure Software? Maybe You

Some 57% of those attending the Gartner IT security summit keynote session believe that vulnerability labs set up by security researchers are a useful public service.

Larry Greenemeier, Contributor

June 5, 2007


The recent observation that companies buying software are unaware of 95% of the bugs contained therein places the well-worn argument about the value of security vulnerability research in a new light. Are security researchers, who spend much of their time finding flaws in others' programming efforts and are often the bane of software vendors, doing enough? And should software consumers escape blame for the shoddy products that make it to market?

In an instant poll taken at the Gartner IT security summit keynote session Tuesday, most attendees, 57% of the 340 people present, said that vulnerability labs set up by security researchers are a useful public service, while 22%, or 75 people, called them a distraction that forces them to patch more often.

Yet there's no consensus on how much information to disclose or when to disclose it. The discovery of a security vulnerability in a piece of software is in many ways like seeing that the front door to your neighbor's house has been left open, David Maynor, chief technology officer of Errata Security, said Tuesday. The options are calling the neighbor right away to alert them to the open door, inspecting the neighbor's house (helping yourself to some of their food and trying on their clothes in the process) before calling them, or calling all the other neighbors on the block to tell them about the open door. An even more nefarious option is to close the neighbor's door but leave it unlocked so the house can be entered some time in the future. In software terms, that pretty much sums up the spectrum: discreet disclosure of vulnerabilities to the software's maker, full disclosure to the public Internet, and no disclosure at all.

Different researchers take different approaches. Maynor, for example, says he gives a software vendor a month to fix a vulnerability before he reports the flaw publicly. "We'll give you 30 days to fix a bug, that's it," he said. Thomas Ptacek, principal of Matasano Security and a member of the Tuesday morning keynote panel, said he's willing to wait until the software maker decides on its own to publicly disclose a vulnerability before he publishes his report.

One sentiment that's been floated is for software vendors, or internal software developers, to be held liable for flaws in their products that lead to intrusions into their customers' -- or their own -- networks and breaches of the data found there. The concept of spending the time and money to write secure programs is a difficult one for executives on the business side to accept, as it can mean extending deployment deadlines, lowering product margins, or passing the higher costs along to customers. But the investment is worth considering: fixing a bug once an application is shipped and in use can cost up to 100 times more than identifying the problem during the development phase, said Chris Wysopal, chief technology officer of Veracode and the third member of the Tuesday morning keynote panel.
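To put that multiplier in concrete terms, consider a back-of-the-envelope calculation. The Python sketch below uses illustrative dollar figures and defect counts of our own choosing, not numbers cited by the panel, to show how the gap compounds across a release:

    # Back-of-the-envelope comparison of defect remediation costs,
    # using the up-to-100x post-ship multiplier Wysopal cited.
    # Dollar figures and bug counts are illustrative assumptions.
    DEV_FIX_COST = 100          # hypothetical cost to fix one bug in development
    POST_SHIP_MULTIPLIER = 100  # worst-case multiplier from the panel
    BUG_COUNT = 50              # hypothetical defects in a release

    dev_phase_total = BUG_COUNT * DEV_FIX_COST
    post_ship_total = BUG_COUNT * DEV_FIX_COST * POST_SHIP_MULTIPLIER

    print(f"Fixed during development: ${dev_phase_total:,}")  # $5,000
    print(f"Fixed after shipping:     ${post_ship_total:,}")  # $500,000

Even at a fraction of the worst case, the asymmetry argues for catching defects before deployment.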

There was no consensus, either, on how much money to spend on the measures required to write more secure applications. Whereas Maynor believes companies should consider spending as much as 25% of a project's total cost on security, Ptacek puts the figure closer to 10%. Maynor's reasoning: internal applications must be secured to defend companies against insiders as well as intruders who are able to penetrate a company's outer defenses.

Still, security researchers aren't so quick to heap all of the blame on software vendors. Those who've properly configured their firewalls and kept their software patches up to date are much less likely to fall victim to a data breach, Maynor said. In April, Microsoft issued an advisory warning users of a vulnerability in its Domain Name System Server service that potentially allowed an attacker to execute code remotely in Windows environments. Although an exploit was published to take advantage of the problem, "it was found that if you had your firewall properly configured, it wouldn't have been an issue," Maynor said.

Another instant poll taken during the session indicated that, when the Windows DNS Server flaw was revealed in April, 30% of attendees waited for Microsoft's patch, while 20% disabled the Remote Procedure Call, or RPC, management interface used by the DNS service. Only 7% tried a non-Microsoft patch. Most disturbing, 23% were unfamiliar with the flaw.
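For those who took the second route, Microsoft's advisory at the time described disabling remote management over RPC for the DNS Server service through a registry change. The Python sketch below illustrates one way an administrator might script that workaround; it assumes the RpcProtocol value documented in the advisory, must run with administrative rights on the DNS server itself, and requires a restart of the DNS service to take effect.

    # Sketch: restrict the Windows DNS Server service to local procedure
    # calls only, cutting off the remote RPC management interface exposed
    # by the April 2007 flaw. Assumes the RpcProtocol registry value
    # described in Microsoft's advisory; run as Administrator.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\DNS\Parameters"
    DNS_RPC_USE_LPC = 4  # LPC only: remote RPC management is disabled

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "RpcProtocol", 0, winreg.REG_DWORD,
                          DNS_RPC_USE_LPC)

    print("RpcProtocol set; restart the DNS service to apply the change.")

The registry route trades remote manageability for a smaller attack surface, which is why blocking the relevant ports at the firewall, as Maynor noted, achieved much the same protection.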

Software consumers have a responsibility to test the security of the products they buy before putting them into production, the panel agreed. If more software vendors believed their customers did this, they'd be compelled to provide a higher-quality product in the first place. Microsoft has gotten a lot of credit for improving both the security of its products and the process by which those products are patched, but it's not hard to figure out why. "Microsoft found religion because they knew that they had a lot of security researchers looking at their product," Ptacek said.

If market scrutiny can change the direction of a company the size of Microsoft, there's no reason other software vendors can't be made to fall in line.
