Full Disclosure
Are security software vendors trying to keep systems safe from threats such as Code Red, or are they more worried about self-promotion?
August 3, 2001
Marc Maiffret won't be mistaken for one of the heavyweights of the IT industry. But there was his name last week, right alongside Microsoft's, in the debate over their roles in one of the most serious threats yet to Internet security--a malicious computer program called Code Red.
Maiffret is learning that with the spotlight comes a certain amount of heat. "I just got finished talking with the FBI," he says from his office in Aliso Viejo, Calif., where he goes by the title of chief hacking officer for security software company eEye Digital Security. Maiffret's company discovered the flaw in Microsoft's server software, Internet Information Services, that Code Red exploited. So, as the FBI talked to Maiffret to learn about Code Red's origins, his peers in the IT industry were questioning why eEye published information about the flaw that was so detailed it could have made a hacker's task easier.

That flaw has let Code Red infect more than 350,000 servers, cripple Web sites, and slow overall Internet traffic. Code Red is a computer worm, a software program similar to a virus in its ability to replicate and transmit itself to other computer systems linked to the Internet. Once a server is infected, it is used to launch a denial-of-service attack, bombarding a target Web site with bogus requests for information that overwhelm the site and force a shutdown.

Maiffret and eEye practice what's known in the security industry as full disclosure--the theory that security analysts who learn about a software vulnerability should publicize everything they know about it in order to pressure software vendors and IT managers to take the risks seriously. "Just saying, 'Look, IIS has a hole, and install this patch,' isn't enough," Maiffret says. "We often publish proof-of-concept tools that administrators use to prove to their bosses that they're vulnerable and that it's serious."
Telling companies to install a patch isn't enough, says eEye's Maiffret; full disclosure is needed to prove that vulnerabilities are serious.

Maiffret's company discovered the flaw in mid-May but waited until Microsoft devised a patch that fixes it before disclosing it. Microsoft published the patch--security bulletin MS01-033--on its security Web site on June 18. The same day, eEye published its own very thorough advisory, including such precise details as the number of bytes a hacker would need to send through the system to exploit the flaw.

Detractors charge that such an abundance of information went overboard. "I'm 100% convinced that they published enough information that any skilled hacker could [exploit the flaw]," says Russ Cooper, surgeon general for security firm TruSecure Corp. and moderator of the NTBugtraq E-mail security discussion group, a forum that includes both hackers and security professionals.

Microsoft's Steve Lipner, lead program manager for Windows security, describes his company's fine-line stance in the debate this way: "We're in favor of full disclosure, not full exposure," he says with a playful laugh. Lipner agrees it's critical for customers to have enough information to protect their systems, but he thinks it's wrong to publish proof-of-concept code, which demonstrates that a vulnerability exists, or source code. "We're not in favor of things that make it easier for the bad guys to break into our customers' systems," he says.

The security industry confronts this issue every time a new vulnerability or threat is discovered. It's a race: Can security experts and IT managers fix the problem before hackers exploit the vulnerability? With Code Red, the bad guys clearly won the race.

"The Code Red worm struck faster than any other worm in Internet history," says Chris Rouland, director of X-Force, the research team for security software and services vendor Internet Security Systems Inc.
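The byte count in eEye's advisory matters because the underlying bug was an unchecked buffer overrun: an attacker who knows the buffer's exact size knows precisely how much padding is needed before the bytes that overwrite adjacent memory. A toy sketch of that arithmetic (the buffer size here is hypothetical, not the real IIS value):

```python
BUFFER_SIZE = 16  # hypothetical fixed-size buffer; the real IIS value differed

def bytes_past_buffer(payload_len):
    """How many bytes of a payload land beyond the end of a buffer
    that is copied into without a length check. Anything past the
    end overwrites adjacent memory, which is what lets an overflow
    redirect a program's execution."""
    return max(0, payload_len - BUFFER_SIZE)

if __name__ == "__main__":
    print(bytes_past_buffer(8))    # short payload fits within the buffer
    print(bytes_past_buffer(240))  # long payload spills past the end
```

This is why critics argued the advisory's precision cut both ways: the same number that lets an administrator verify exposure tells an attacker exactly where the overwrite begins.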
According to SecurityFocus.com, a Web site that tracks computer security problems, Code Red first struck on July 12, 24 days after Microsoft and eEye jointly announced the IIS vulnerability. At first, the worm proved a relatively inefficient threat: by July 14, several thousand systems had been infected, and by July 18 that number topped 12,000.

The next day, something unusual happened: The number of infected systems skyrocketed to roughly 350,000. Security analysts say a new and improved version of the worm appeared on the Internet with a more efficient way of picking random IP addresses to attack. Code Red version 2 tore through the Internet July 19 and launched yet another attack July 31.

Some, such as NTBugtraq's Cooper, say Code Red's explosive second wind could be due to the amount of information eEye released publicly. Cooper notes that the flaw had existed for almost four years. "No one exploited the vulnerability--that we know of, and certainly not on this scale--until a little over three weeks after eEye published its advisory information," Cooper says.

Maiffret isn't apologizing. He argues that, along with convincing IT and software executives to take threats seriously, detailed information helps vendors, such as those that make intrusion-detection systems, upgrade their software, and helps system administrators write scripts to ferret out infected systems on their networks.

There's a middle ground between Maiffret and Cooper. ISS's Rouland is part of what might be described as the "partial disclosure" camp. Rouland says full disclosure unreasonably cuts the time available for IT managers and security experts to fix problems. Hackers can take the information published about a software vulnerability and act almost immediately, while system administrators need to find the people to do an update, test a vendor patch on their systems, and convince management that the problem justifies bringing down the network or E-commerce system to install the patch.
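The administrator scripts Maiffret describes can be quite small. Code Red's infection attempt leaves a distinctive line in a Web server's access log: a GET request for /default.ida padded with a long run of N characters (X in the later variant). A minimal sketch of a log scanner built on that signature, with invented sample log lines:

```python
import re

# Code Red's probe appears in web-server logs as a GET request for
# /default.ida followed by a long run of 'N' (first variant) or 'X'
# (second variant) characters padding the buffer overrun.
CODE_RED_SIGNATURE = re.compile(r"GET\s+/default\.ida\?[NX]{20,}")

def find_code_red_hits(log_lines):
    """Return the log lines that look like Code Red probes."""
    return [line for line in log_lines if CODE_RED_SIGNATURE.search(line)]

if __name__ == "__main__":
    sample = [
        '10.0.0.5 - - [19/Jul/2001] "GET /index.html HTTP/1.0" 200',
        '10.0.0.9 - - [19/Jul/2001] "GET /default.ida?' + "N" * 224 + ' HTTP/1.0" 404',
    ]
    for hit in find_code_red_hits(sample):
        print(hit[:60])
```

A script like this only identifies which machines were probed or hit; the detailed byte-level information in the advisory is what let administrators and intrusion-detection vendors build such signatures in the first place.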
In addition, publishing a large amount of detailed information helps skilled hackers publish scripts--automated hacking tools that let wannabe hackers, known as script kiddies, cause chaos on the Internet using tools they barely understand.

It's those meddling kids who worry IT managers such as Daniel Kesl, information security officer at Newmont Mining Corp. in Denver. "I'm not so worried about professional hackers," Kesl says. "I'm more worried about the script kids running scripts against vulnerable systems every day, causing havoc."

Full disclosure has its supporters. "How can we understand vulnerabilities or create defenses without full disclosure of vulnerability information?" asks Fred Cohen, a security expert and author in 1984 of the first paper on computer viruses. Even amid the Code Red chaos, Maiffret received some friendly mail through the SecurityFocus Incidents mailing list: "Thank you for the info. We just took a hit from Code Red and using your write-up were able to quickly contain it and fix the hole," wrote someone using the handle "Claymore."

Clearly, some of the blame falls on IT managers for not installing publicly available patches. Hackers have been known to exploit vulnerabilities weeks, months, sometimes years after flaws have been made public and patches made available. Early last year, a hacker calling himself Curador stole more than 25,000 credit-card numbers from small E-commerce Web sites by exploiting a well-known Microsoft security flaw for which the vendor had published a patch.

In most corporate IT environments, even after a software vendor sends out an alert, the patch job can languish.
"Security often takes a backseat to other projects that management deems more important, and the resources aren't always made available to put patches into place immediately--or even within weeks," says a network administrator at a major medical company, who asked not to be identified.

In one study of the issue, Windows Of Vulnerability: A Case Study Analysis, William Arbaugh of the University of Maryland and William Fithen and John McHugh of the CERT Coordination Center conclude that in the cases they studied, patches for vulnerabilities were available about a month before sites suffered intrusions. Almost all the reported intrusions stemmed from vulnerabilities that could have been fixed with published security patches.

One reason IT managers cite: Applying patches without time to test them can cause trouble. Jeff Brewer, lead security analyst at financial data-services company Fiserv Inc., says he's been burned before. "We applied patches without testing, and it brought us down," he says. "Now we test everything before applying it. And that takes time."

Maiffret says that's a common worry. "Especially for Microsoft patches, NT administrators are more afraid of installing the security patch than of the actual vulnerability," he says. "Microsoft has such a bad track record of publishing patches that break things."

Microsoft's Lipner defends the company's record and says its biggest challenge is delivering patches under unreasonable time limits set by security firms eager to take credit for finding a vulnerability. "We're usually under some level of time pressure, and that's a factor," Lipner says. "Sometimes, we learn about vulnerabilities when they're publicly posted to one of the full-disclosure mailing lists."

Software companies have changed the debate around full disclosure by reacting more quickly to problems. It used to be common for vendors to resist disclosure of any flaw and try to wait for a new release to resolve a problem.
Newmont Mining's Kesl says that attitude would have remained unchanged if not for the threat of vulnerabilities being disclosed. "Vulnerabilities have to be addressed immediately, or you're facilitating attacks on other systems," he says. "That's why I'm in favor of vulnerabilities being brought public if the vendor refuses to fix the problem in a reasonable period of time. And that, in most cases, is a few weeks."

Another factor sparking change is that widespread Internet use has made security issues mainstream news. That means a publicized flaw can be embarrassing for a software vendor--but also a nice publicity boost for a small security company. Maiffret's experience alone may be enough to keep the practice alive. "If they're going to get published in The Wall Street Journal," predicts Hurwitz Group security analyst Pete Lindstrom, "they're going to keep doing it."

Photo by John Faulkner