Shameless self-promoters? Fearmongers? Sure, security researchers aren't always model citizens, but business technology pros want them on the job.

Larry Greenemeier, Contributor

April 14, 2006

Microsoft has issued security bulletins since 1998, but it wasn't until 2001 that it began crediting in those bulletins the security researchers who found vulnerabilities and reported them to the company. The following year, Microsoft attended a Black Hat conference and threw a big party for those contributors, part of a shift in thinking at the company toward working with the independent security research community. Says Stathakopoulos, "It was a big wake-up call that this community does care about security."

Since March 2005, Microsoft has held invitation-only "BlueHat" security briefings twice a year, tapping security researchers to help Microsoft programmers think the way those researchers do when probing for weaknesses. "When you have a researcher who takes the assumptions you made and shows you how they can be exploited, that hits you in the gut," says Stephen Toulouse, security program manager with Microsoft's security technology unit.

Microsoft still has its conflicts with the community, including with researchers who issued alerts about the IE and WMF problems earlier this year before patches were available. And it has work to do to convince researchers that it takes threats seriously and that they can get public credit if they go to Microsoft first.

Security consultant Kornbrust, a presenter at BlueHat events, says it's only by embracing the research community and its relentless probing that Microsoft's products are getting better.

Even some hard-core critics say Microsoft has earned some street cred. Security researcher Moore, who posted exploit code for the WMF flaw and has a tool on his site for testing how vulnerable systems are to intrusion, urges fellow researchers to cut Microsoft some slack on the March IE vulnerabilities. "Some of the new folks on the MS IE team are the same people who posted bugs to this list a couple years ago," wrote Moore, who has spoken at BlueHat events, in a posting on the Dailydave E-mail list.

Keeping Score

There aren't clear guidelines, or an agreed-upon code of ethics, when it comes to disclosure. There have been calls for a set of standards that govern how researchers communicate their findings, and some researchers adhere to guidelines from the Organization for Internet Safety for security vulnerability reporting and response (www.oisafety.org). Those outline best practices for discovering and disclosing security vulnerabilities without putting companies in danger. But they don't define how much information is too much to disclose. As a result, TippingPoint doesn't adhere to the guidelines. "It's a complicated set of rules, and we tend to feel that on a case-by-case basis, you can deal with other product vendors," Endler says.

So the sheer volume and uneven quality of security research will continue to challenge IT security executives. Still, most security pros want more information. "Let's expose it," says Sadler of Brown University. "Yes, sometimes that backfires. But from a high level, it's a good thing. The folks who use this information to do damage are going to know about it long before us anyway." And, she adds, tracking which vendors continually produce insecure products can't be a bad thing. Looks like security researchers aren't the only ones keeping score.

Continue to the sidebars: "10 Infamous Moments In Security Research" and "Avoid Alert Overload"

Illustration by Ryan Etter
