One of the biggest problems facing computer network administrators is the lag between the discovery of a cyber-vulnerability or an attack and the remediation of that incident across the entire network. The Defense Advanced Research Projects Agency (DARPA) is looking to address that by sponsoring a Cyber Grand Challenge.
First announced in October, the tournament now has new details available for potential participants. DARPA is trying to advance the development of automated cyber-reasoning by having competing teams develop systems capable of evaluating software, identifying flaws, creating patches, and deploying them on a network in real time.
According to a broad agency announcement, competitors can choose one of two tracks: an unfunded track, in which competitors underwrite the cost of the competition themselves, and a funded track, in which teams can apply for DARPA funding. DARPA will evaluate teams' qualifications and select the competitors.
This kind of automated system, called program analysis, "has been around for quite a while. The feeling is that it's on the verge of a breakthrough and ready to move out of the lab and into the field," said Mike Walker, the DARPA program director for the challenge, in a Dec. 6 telephone conference call with reporters. The unmanned cyberdefense tournament will last more than two years; the overall winner will receive a first-place prize of $2 million, with the second-place finisher receiving $1 million, and third place getting $750,000.
Walker said there is "a lot of interest" in the security community, but he declined to say how many teams have registered to date. Competitors will first face a qualification round, currently scheduled for June 2015; the final round is planned for July 2016.
DARPA is designing a custom environment for the competition, to provide "a field of their own," Walker said. "The software written just for the competition is given to the competitors at the same time." The point is not to test the teams' knowledge of existing software, operating systems, and bugs, but to test the analytical skills of the teams' automated products, he told us.
"Analysis of code will work on known and unknown protocols. We want to make sure our measurement is not polluted -- if I know there are unknown protocols in there, I will focus entirely on adaptation capability."
Walker said the challenge would be based on the C-language family, with binaries created just for the challenge. The broad agency announcement includes a list of known defect types published by Mitre, a nonprofit, federally funded research and development firm, so that teams will have an idea of the kinds of realistic flaws the challenge will require them to find and fix.
While the long-term goal is to develop a deployable network defense that can protect military networks, Walker said the agency is not expecting that outcome by the end of this competition.
"If we look at the 2004 Vehicle Grand Challenge," for teams to build self-driving vehicles to negotiate a 150-mile desert course, "those prototypes were not ready to roll out of the competition and onto America's highways," he said. "I think that what we're going to transition out of this is the lessons of competition, the ideas, and the correct application of how to build [this]."