Eric McCarty claims he hacked into the University of Southern California's computer system to warn of its vulnerabilities. The case could be a watershed event in the area of security research.

Larry Greenemeier, Contributor

May 9, 2006

Eric McCarty, a 25-year-old San Diego resident, was charged in April with hacking into the University of Southern California's computer system and accessing confidential information submitted by students applying to the school. The case, in which McCarty claims he was simply trying to warn USC of possible security flaws in its Web site, will likely be a watershed event in the area of security research, particularly if McCarty is convicted and handed the maximum sentence of 10 years in federal prison.

McCarty's case lifts the hood on Web security, exposing a number of legal, ethical, and technical questions that to date have no easy answers. No one disagrees that McCarty broke the law. Whether McCarty was wrong or unethical is an altogether different question. And then there's the matter of the penalty for his indiscretion. A decade in a federal prison comes across as a bit extreme to many IT security pros, particularly considering McCarty's willingness to cooperate with the FBI once the bureau began its investigation.

The SQL database behind USC's online applicant Web site contains Social Security numbers, birth dates, and other information for more than 275,000 applicants dating back to 1997. After finding a vulnerability in the site's login system, McCarty staged an SQL injection attack to gain access to the database. An SQL injection takes place when a hacker enters database commands into an improperly secured Web form field, tricking the application into executing them against its back-end database. USC's site was shut down for two weeks in June 2005 while the university addressed the issue. McCarty made his initial appearance in U.S. District Court in Los Angeles on April 28.
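
The mechanics are easy to sketch. The short Python example below is a minimal illustration, not USC's actual code: the applicants table, the vulnerable_login and safe_login functions, and the in-memory SQLite database are all invented for the sake of the example. It shows how a login value pasted directly into a query string lets an attacker rewrite the query, and how a parameterized query closes the hole.

```python
import sqlite3

# Illustrative only: the table, columns, and data are invented for this
# sketch and are not USC's actual schema or application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applicants (username TEXT, password TEXT, ssn TEXT)")
conn.execute("INSERT INTO applicants VALUES ('jdoe', 'secret', '123-45-6789')")

def vulnerable_login(username, password):
    # User input is pasted straight into the SQL string, so the input
    # can change the meaning of the query itself.
    query = ("SELECT * FROM applicants WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchall()

def safe_login(username, password):
    # A parameterized query keeps the input as data, never as SQL.
    query = "SELECT * FROM applicants WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

# The classic payload: the quote closes the string literal, and the
# appended OR clause makes the WHERE condition true for every row.
payload = "' OR '1'='1"

print(vulnerable_login(payload, payload))  # dumps every applicant record
print(safe_login(payload, payload))        # returns nothing
```

In the vulnerable version, the payload turns the WHERE clause into a condition that matches every row; the parameterized version treats the same input as inert data.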

Many security pros agree that McCarty's intention of improving the security of USC's Web site was commendable, and that USC should acknowledge as much. But those same pros fault him for probing the site without first getting the university's permission.

"McCarty was trying to prove a point," says Rick Fleming, VP of security and risk management consulting for Digital Defense Inc., which offers penetration testing services. "Part of me commends him for saying, 'Hello, wake up.' But he crossed an ethical boundary because he didn't have permission to test that system, and he broke the law."

Security researchers are at the most basic level guided by an online document known as RFPolicy, which unofficially lays out the process for researchers to communicate with software developers and vendors about any bugs the researchers find in the developer's software. The purpose of RFPolicy, which was conceived by a security expert known as Rain Forest Puppy, is "to quash assumptions and clearly define intentions, so that both parties may immediately and effectively gauge the problem, produce a solution, and disclose the vulnerability."

RFPolicy, which dates back five or six years, was designed to police how researchers disclose vulnerabilities to software vendors, says Jeremiah Grossman, a former Yahoo information security officer who's now founder and chief technology officer with Web application security provider WhiteHat Security Inc. "Everyone was intrigued that someone put a line in the sand," he says of security researchers' reaction to this informal edict.

But RFPolicy isn't recognized by any standards or legal entity. It also doesn't address the crucial question of how researchers can legally go about finding flaws in Web applications running in someone else's IT environment. "I can buy a Windows license and rip [the software] apart until my heart's content and not break any laws," Grossman says. The same can't be said of a Web-based E-mail program, for example, even if it contains a user's personal E-mails, because the user doesn't own the servers that run the program.

"I don't believe RFPolicy can be applied to McCarty's case because the ethics of the document only apply to the disclosure of vulnerabilities, not their discovery," Grossman says. "RFPolicy is suitable for vulnerabilities identified within applications, like Windows, Apache, IIS, Linux, Oracle, etc.--software that researchers may install and test on computers they own, so the discovery portion is a nonissue. Web application security vulnerabilities are completely different because by their nature they exist on someone else's computers."

McCarty's case isn't completely without precedent. Late last year, Londoner Daniel Cuthbert, a former employee of ABN Amro, was fined more than $700 and ordered to pay more than $1,000 in court costs after he was convicted under the U.K.'s Computer Misuse Act 1990 of gaining unauthorized access to a Web site collecting donations for victims of the 2004 tsunami. Cuthbert claimed he carried out two tests to check the security of the site after he suspected his donation to the relief effort had gone to a phishing site rather than the actual charity.

Grossman cites last year's attack on the MySpace Web site by an unknown programmer calling himself "Samy" as a far more egregious example of Web application hacking than either the McCarty or Cuthbert cases. Samy, who was never caught, inserted code into his MySpace user profile so that anyone viewing his profile had the same code copied into their own. More important, Samy, unlike McCarty or Cuthbert, executed his exploit with no intention of improving security on the site he attacked.
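
Stripped of the filter-evasion tricks the real worm relied on, the underlying flaw is stored cross-site scripting: markup saved in one user's profile is replayed, unescaped, into every visitor's browser. The Python sketch below is purely illustrative; the render_profile_unsafe and render_profile_escaped functions and the copyThisProfilePayload name are invented, and the point is simply why escaping stored content neutralizes the attack.

```python
import html

# Purely illustrative: these rendering functions and the payload name
# are invented for the sketch; the real worm was far more involved.
stored_profile = "Hi, I'm Samy <script>copyThisProfilePayload()</script>"

def render_profile_unsafe(profile_text):
    # Stored text is dropped straight into the page, so the <script>
    # tag executes in the browser of every visitor who loads it.
    return "<div class='profile'>%s</div>" % profile_text

def render_profile_escaped(profile_text):
    # Escaping turns the markup into inert text before it reaches
    # the visitor's browser.
    return "<div class='profile'>%s</div>" % html.escape(profile_text)

print(render_profile_unsafe(stored_profile))   # <script> tag survives intact
print(render_profile_escaped(stored_profile))  # rendered as harmless text
```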

Regardless of McCarty's fate, security research won't stop. "You'll just push it underground," Grossman says. Researchers wearing the so-called "white hat" and testing programs to improve security will likely hesitate to report their findings through official channels for fear of legal repercussions. "And if the good guys aren't going to do this research, that's a bad thing because the bad guys certainly won't stop," he adds.

Fleming agrees that McCarty's fate will likely determine whether other penetration testers continue their work in the open or stay off the radar. If flaws go unreported, companies running Web sites will be lulled into a false sense of security. "The net effect is that companies will say, 'No one's finding flaws in my software, so I must be secure,'" he says. "That's only true if people are looking for these flaws."
