What you don't know about the security of your information systems can hurt you and probably already has. But how much information about security flaws is too much? Anything you're told about a software vulnerability, the villains surely will pick up, too.
Most people who manage information security say bring it on, subscribing to the belief that the more details they have about vulnerabilities, the better prepared they will be. They count not only on the software vendors' official security advisories but also on researchers who specialize in analyzing products for flaws.
That's where the controversy heats up. Many researchers bring serious flaws to light, but others are all too willing to cash in on their cleverness by posting information about software vulnerabilities before vendors have a chance to patch their products. This shameless self-promotion of being the first to expose a key vulnerability can bring fame and consulting contracts. Other firms readily open up their checkbooks to pay hackers for dirt on flaws, doling out premiums for the worst flaws--ones that, say, Microsoft ends up rating critical. These disclosures are followed closely by malicious hackers looking for cracks to exploit. They also force IT staffs to drop more strategic projects in order to plug holes in their systems before the next big worm or Trojan strikes.
It's when these researchers also hold elite vendors--like Microsoft and Oracle--accountable that they earn their keep. Software is more secure today in part because the threat of public embarrassment hangs over the vendors, says George Roettger, Internet security specialist for ISP NetLink Services, in an E-mail interview. Researchers also wield a lot of power, he says, because their information can be used by "the bottom-feeder hackers who don't know much and learn from every piece of information available."
The hoopla about Windows Metafile led to some of the most frustrating days of Connie Sadler's career. "There were people predicting dire straits if we didn't do something," says the director of IT security at Brown University. "I had to put days aside from my regular job just to monitor different sites just in case something came up."
While the researchers kicked up dust, companies looking to defend themselves had few options. Microsoft's customers had to either wait for Microsoft's patch or install workaround code written by Ilfak Guilfanov, a senior developer at Belgian software maker DataRescue. Although Guilfanov's code got the approval of the SANS Institute's Internet Storm Center and security research firm F-Secure, it created a dilemma for people like Sadler. "If we ask our users to install that, it tells them it's OK to find and install a third-party patch, and it gave phishers an opportunity to exploit users," says Sadler, a former manager of global infrastructure security at Lockheed Martin.
Given the sensitive information these researchers traffic in, there's really only one rule for them to follow: responsible disclosure. This refers to giving software makers a chance to patch their products and users time to patch their systems before disclosing details about a vulnerability. Microsoft and many other vendors have E-mail addresses that researchers can use to report their findings.
But researchers know that there's a lot more attention to be had and money to be made by going straight to the Internet with information about vulnerabilities and so-called proof-of-concept exploit code that some hackers say is necessary to convince vendors of the urgency of fixing software bugs. This code also provides a malware template for less experienced hackers.
Microsoft isn't the only vendor on which the research community keeps a close eye. Apple's iChat instant messaging application became the vector for the OSX/Leap.a Trojan in February, after an unknown user posted a link to the malware on the MacRumors.com site. Oracle likewise has a following of vulnerability hunters, who caused a stir around the company's January critical patch update by openly discussing flaws in its software and publishing controversial workaround code.
And don't forget Cisco Systems nemesis Michael Lynn, the former Internet Security Systems researcher who now works for Cisco competitor Juniper Networks. Lynn's July 2005 Black Hat conference presentation proved hackers could break into Cisco's Internetwork Operating System and take control of a company's network traffic. That presentation, more than any other software flaw "outing," put the spotlight on software security analyst practices. Lynn, who did his research on IOS as a member of ISS's X-force research arm, infuriated Cisco, which accused him of making its customers' networks more vulnerable. Lynn claimed that he was simply alerting Cisco customers that the unthinkable, a malicious hack into IOS, was possible.
Loose Lips ...
Just a few days prior to the createTextRange bug's debut, Dutch programmer Jeffrey van der Stad alerted Microsoft to an IE problem related to the way the browser processes HTML applications, or HTA files. Van der Stad had published detailed information about the vulnerability on his Web site but later pared back the information at Microsoft's request.
Security researchers walk a fine line between being a blessing and a bane to security. David Litchfield, managing director of Next Generation Security Software, in January took Oracle to task for a vulnerability in the company's Procedural Language extension to SQL and posted a brief description of the problem to the Bugtraq and Full Disclosure security mailing lists. The flaw, which Litchfield called critical, was in the Oracle PLSQL gateway and could let an attacker grab control of an Oracle database server via a compromised Web server.
Litchfield proceeded to post to Bugtraq, a mailing list owned by Symantec, a workaround that users could implement to keep Oracle's vulnerability from being exploited, but Oracle countered that these workarounds kept certain E-business apps from working properly. Litchfield defends his actions, saying he didn't create the problem, and disclosure lets people take responsibility for their own defense. "The worry is, if I say too much, everyone will be able to hack into [Oracle's software]," Litchfield says. "So you can't release exploit code. And if you fully disclose details of a vulnerability, some people can create their own exploit, so you can't do that. No matter how much or little you can disclose, you'll always have a problem."
Yet sometimes researchers such as Litchfield unwittingly aggravate a problem. While working as an IT consultant at a German bank in 2002, Litchfield came across a flaw in Microsoft's SQL Server database that caused it to crash, and he reported the problem to Microsoft. A week after Microsoft issued a patch to address the "heap overflow" and "stack overflow" vulnerabilities Litchfield found, he gave a presentation about the problem at a U.S. Black Hat conference and warned that it should be patched as soon as possible. "I made a clear warning at the time that this would be turned into a worm unless people patched," he says. "They didn't, and we ended up with Slammer."
Although Litchfield wasn't burned in effigy, he did learn a valuable lesson: Publishing the code wasn't the best way to inform people of the problem. Still, he's quick to point out that because of Slammer, "you never, or very rarely, come across an unpatched SQL Server, which is fantastic. Slammer helped change people's ideas and perspectives on patching and patch management."
Litchfield speaks with conviction. There's too much at stake for him to worry about ticking off vendors and users, he says. His Next Generation Security Software gives the U.K. National Infrastructure Security Co-ordination Centre a heads-up on software vulnerabilities so it can help protect organizations running critical IT infrastructure. In a post Litchfield made on Jan. 5 to Insecure.org, a site run by a programmer who calls himself Fyodor, he noted, "Much of the U.K. Critical National Infrastructure relies on Oracle; indeed, this is true for many other countries as well. I know that there's a lot of private information about me stored in Oracle databases out there. I have good reason, like most of us, to be concerned about Oracle security; I want Oracle to be secure because, in a very real way, it helps maintain my own personal security."
In January, Alexander Kornbrust, CEO of security research and consulting firm Red-Database-Security and a former employee of Oracle in Germany and Switzerland, reported that an Oracle security feature called transparent data encryption stored its master encryption key unencrypted in the system global area--the shared memory region an Oracle instance uses to cache data and control information. Kornbrust's conclusion: A skilled attacker or even a database administrator not versed in hacking techniques could retrieve the plain-text master key, which would let that person decrypt all data encrypted using that key. Oracle says it addressed this problem in January's critical patch update.
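The flaw Kornbrust reported can be reduced to a toy sketch. None of this is Oracle's actual code--the XOR "cipher," the data, and the variable names are stand-ins--but it shows why the design fails: if the master key sits unencrypted next to the ciphertext in readable memory, no cryptanalysis is needed to recover the data.

```python
# Toy illustration (NOT Oracle's implementation): why an unencrypted
# master key stored alongside ciphertext defeats the encryption.
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher. XOR is symmetric, so the
    # same function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# A mock "system global area": a shared, readable memory structure.
sga = {
    "master_key": b"not-so-secret",  # stored in plain text -- the flaw
    "encrypted_column": xor_cipher(b"4111-1111-1111-1111", b"not-so-secret"),
}

# An attacker (or a curious DBA) who can read the SGA needs no hacking
# skill at all: the key is sitting right next to the ciphertext.
stolen_key = sga["master_key"]
plaintext = xor_cipher(sga["encrypted_column"], stolen_key)
print(plaintext.decode())  # prints the "protected" card number
```

The point of the sketch is that the encryption itself can be flawless; key storage is what decides whether the data stays secret.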
Kornbrust insists the work that he and other security researchers perform is essential to improving the security of Oracle's software. "Oracle is doing a better job than before because more people are publishing Oracle security-related stuff," he says. "If I find security issues in Oracle products during my work, I normally report these issues to Oracle to make the product more secure."
The encryption-key slip-up was particularly painful for Oracle, which typically is vague about bugs in its software. While it works with researchers to collect and react to vulnerabilities, it's wary about providing much information about vulnerabilities and patches, convinced that doing so helps hackers. When it finds a vulnerability, it usually keeps quiet until it has a patch that works with Oracle apps. It has taken heat for not disclosing more information, such as workarounds that companies could implement on their own.
Kornbrust isn't concerned that by publishing information about Oracle vulnerabilities to the Web or in mailing lists, he's putting users at risk. "Most DBAs and security people are looking for additional information because Oracle advisories are quite vague," he says. Many IT executives back that notion. "I seriously consider all information valuable," says Dennis Brixius, VP and chief security officer at McGraw-Hill. "There are products being developed that aren't 100% secure products. We all understand that."
Yet security researchers run afoul of the users they pledge to protect, as well as of other researchers, when they break with accepted protocol and post information that can be used to develop zero-day exploits for which there are no patches. Firms that do so generally are more interested in promoting themselves and their products than in security. "For some companies, vulnerabilities are the cheapest form of marketing," Kornbrust says.
Users also understand the researchers have a profit motive in wanting to be first to spot a flaw. When assessing these companies, you have to look at "how these companies stand to gain," Brown's Sadler says.
Enter the dodgy practice of paying for vulnerabilities, with price tags that can run to $10,000. Through its Zero Day Initiative, 3Com's TippingPoint network security division pays freelance researchers for information about security flaws they discover in a variety of applications, not just those that 3Com or TippingPoint sell. Researchers must register with 3Com's Web site, and TippingPoint verifies the accuracy of the information, rates the severity of the flaw, and shares that information only with the vendor whose technology is affected until a patch is issued. In January, the Zero Day Initiative yielded information about a vulnerability in Excel, which Microsoft patched in March as part of its monthly Patch Tuesday download. Any dig at Microsoft, of course, provides publicity for TippingPoint's work, well worth the bounty it pays for the information.
TippingPoint launched the Zero Day Initiative last August and has worked with more than 300 freelance researchers in addition to its own staff of 25. Freelancers access a portal to find out what TippingPoint is looking for and to submit their work for acceptance and payment. A researcher enters a vulnerability discovery into the portal, where the information is encrypted and delivered to TippingPoint researchers, who determine whether the flaw affects a technology widely deployed in their customers' environments.
Once TippingPoint evaluates the information submitted, it makes the independent researcher a financial offer based on the value of the vulnerability to TippingPoint customers, though the company won't say how much it pays. After a price is agreed upon, TippingPoint requires researchers to send the company a copy of a government-issued ID and other information to verify their identity before they can collect their bounty.
Some say that bounties work because they attract security researchers whose work would otherwise be ignored by big software vendors. "If you get some researcher with a handle of 'xyoc,' will Microsoft talk to this guy?" says Ken Dunham, director of the rapid response team at iDefense, a security research group that VeriSign bought in July 2005. Through iDefense, VeriSign pays researchers who provide it with notice of unpublished vulnerabilities and exploit code. Created in 2002, the program taps nearly 300 independent researchers.
IDefense pays a bounty of $10,000 for any vulnerabilities in Microsoft products that Microsoft classifies as critical. Paying for flaws may sound unsavory, but iDefense created its program after its researchers saw Microsoft WMF vulnerabilities being sold for $4,000 on the black market. "We're providing an outlet," says Joseph Payne, iDefense's president and COO. "There's already an underground market for these vulnerabilities."
Microsoft isn't keen on a $10,000 bounty to find critical flaws in its products. "Our preferred method is working with the community directly rather than creating an economy for this work," says George Stathakopoulos, a senior director of security at Microsoft. Still, Microsoft hasn't asked iDefense to drop the incentive program.
Redmond Reaches Out
Microsoft's acceptance of the security research community has come a long way, including acknowledging the gray areas in which researchers must work. "The notion of good and evil is confusing in this space," Stathakopoulos admits. "Our job is to understand this community and promote responsible disclosure."
Microsoft hardly wants a repeat of its Windows 2000 Plug and Play problem, when ISS researchers last year found a flaw that let attackers take complete control of affected systems and remotely execute code, leading to the various incarnations of the Zotob worm.
Since March 2005, Microsoft has held invitation-only "BlueHat" security briefings twice a year, tapping outside security researchers to help Microsoft programmers think the way attackers do when probing for weaknesses. "When you have a researcher who takes the assumptions you made and shows you how they can be exploited, that hits you in the gut," says Stephen Toulouse, security program manager with Microsoft's security technology unit.
Microsoft still has its conflicts with the community, including the researchers who issued alerts about the IE and WMF problems earlier this year before patches were available. And it still has work to do to convince security researchers that it will take threats seriously, and that researchers can get public credit if they go to Microsoft first.
Security consultant Kornbrust, a presenter at BlueHat events, says it's only by embracing the research community and its relentless probing that Microsoft's products are getting better.
Even some hard-core critics say Microsoft has earned some street cred. Security researcher Moore, who posted exploit code for the WMF flaw and has a tool on his site for testing how vulnerable systems are to intrusion, urges fellow researchers to cut Microsoft some slack on the March IE vulnerabilities. "Some of the new folks on the MS IE team are the same people who posted bugs to this list a couple years ago," wrote Moore, who has spoken at BlueHat events, in a posting on the Dailydave E-mail list.
There aren't clear guidelines, or an agreed-upon code of ethics, when it comes to disclosure. There have been calls for a set of standards governing how researchers communicate their findings, and some researchers adhere to the Organization for Internet Safety's guidelines for security vulnerability reporting and response (www.oisafety.org). The guidelines outline best practices for discovering and disclosing security vulnerabilities without putting companies in danger, but they don't define how much information is too much to disclose. As a result, TippingPoint doesn't adhere to them. "It's a complicated set of rules, and we tend to feel that on a case-by-case basis, you can deal with other product vendors," says David Endler, TippingPoint's director of security research.
So the unwieldy quantity and quality of security research will continue to challenge IT security executives. Still, most security pros want more information. "Let's expose it," says Sadler of Brown University. "Yes, sometimes that backfires. But from a high level, it's a good thing. The folks who use this information to do damage are going to know about it long before us anyway." And she adds that tracking which vendors continually produce insecure products can't be a bad thing. Looks like security researchers aren't the only ones keeping score.
Illustration by Ryan Etter