Field Report: DARPA Grand Challenge, Primm, Nev. - InformationWeek


News | 10/26/2005, 02:15 PM
Racing robots follow the "rules" of the offroad

Robot history was made on October 8th in the second annual DARPA Grand Challenge, a desert race held near Las Vegas featuring autonomous unmanned vehicles. Sponsored by the Defense Advanced Research Projects Agency, the event was created as a way to spur progress toward developing unmanned battlefield vehicles. Among the technologies guiding the bots were artificial intelligence and the same rules engine technology used in credit checks and business process management deployments.

In the inaugural DARPA Grand Challenge last year, $1 million was offered to the first vehicle to cross the finish line in less than 10 hours, but none of the competitors finished the race. In fact, the top robot traveled less than eight miles. This year's prize was increased to $2 million, and of the 195 applicants, 43 made the cut as semi-finalists and 23 as finalists.

Five vehicles completed the 132-mile obstacle course. The winner was Stanford [University] Racing Team's "Stanley," a modified Volkswagen Touareg outfitted with GPS, laser range finders, radar, vision systems and seven Pentium M computers to analyze sensor data and geospatial mapping information (www.stanfordracing.org). Stanley finished the course in 6 hours and 54 minutes, averaging 19 miles per hour.

Second place went to the Red Team (www.redteamracing.org) from Carnegie Mellon University. "Sandstorm," a modified first-generation Hummer, finished the course in 7 hours and 4 minutes. Carnegie Mellon also took third place with "H1LANDER," a modified H1 Hummer that finished in 7 hours and 14 minutes.

What made the difference in this year's performance? The teams "have great software and great smarts," said DARPA director Tony Tether. "The Stanford machine uses a learning algorithm and it actually learns the more it's used. I'm told it has hundreds and perhaps more than 1,000 miles of experience in desert terrain."

Race officials withheld final course information until two hours before the race, forcing contestants to rely on more than preprogrammed GPS waypoints. Tank traps, parked vehicles, cones and tunnels forced vehicles to deal with unmapped obstacles in real time. That's where the combination of vision systems, radar and range-finding sensors came in.
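The core of that real-time step is separating sensor returns that match the preloaded map from those that don't. The sketch below is a hypothetical illustration of that idea only, not any team's actual code: lidar-style hits that fall near a known map feature are ignored, and anything else is treated as an unmapped obstacle.

```python
# Toy sketch of unmapped-obstacle detection (illustrative assumption,
# not the actual software from any Grand Challenge team).  Sensor hits
# that the preloaded geospatial map does not explain are flagged as
# obstacles the vehicle must handle in real time.

def detect_unmapped(sensor_hits, mapped_features, tolerance=2.0):
    """Return sensor hits (x, y) farther than `tolerance` meters,
    on both axes, from every feature on the preloaded map."""
    def near_map(hit):
        return any(
            abs(hit[0] - fx) <= tolerance and abs(hit[1] - fy) <= tolerance
            for fx, fy in mapped_features
        )
    return [h for h in sensor_hits if not near_map(h)]

mapped = [(10.0, 0.0)]             # e.g., a fence post already on the map
hits = [(10.5, 0.5), (3.0, -1.0)]  # range-finder returns ahead of the car
obstacles = detect_unmapped(hits, mapped)
print(obstacles)  # only (3.0, -1.0) is unexplained -> treat as an obstacle
```

A real system would fuse radar, lidar and vision with far more sophistication, but the map-versus-sensor comparison above is the basic distinction the course designers were testing.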

In contrast to the large, university-led teams, semi-finalist Team Jefferson, from startup company Perrone Robotics (www.perronerobotics.com/dgc), competed on a bare-bones budget using open-source and donated off-the-shelf commercial software, including a Blaze Advisor rules engine from Fair Isaac. The team's customized dune buggy, "Tommy," relied on Perrone's Java-based Mobile Autonomous X-bot software platform. Fair Isaac's rules engine was used offline to compare the final course data with available geospatial mapping data and program the safest possible route in less than two hours.

"Last year the teams were poring over the course data manually within that two-hour window," Perrone explained. Determining routes is "an example of something that can be easily automated with a rules engine. We're going to let the rules engine pore over the data and plot those points, and it will do it accurately within 20 minutes."
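The approach Perrone describes can be sketched as a set of declarative rules vetting each waypoint against terrain data. The example below is a generic, hypothetical illustration in Python; the team actually used Fair Isaac's Blaze Advisor, a commercial Java rules engine, and the waypoint names, terrain attributes and thresholds here are invented for illustration.

```python
# Toy rule-based waypoint vetting (hypothetical sketch, not the team's
# Blaze Advisor rule set).  Each rule is a predicate over a waypoint's
# terrain attributes; waypoints that trip any rule are flagged for a
# detour instead of being hand-checked within the two-hour window.

TERRAIN = {  # invented geospatial attributes keyed by waypoint id
    "wp1": {"slope_deg": 4,  "surface": "gravel"},
    "wp2": {"slope_deg": 22, "surface": "rock"},
    "wp3": {"slope_deg": 6,  "surface": "sand"},
}

RULES = [
    # (description, predicate returning True when the waypoint is unsafe)
    ("slope over 15 degrees", lambda t: t["slope_deg"] > 15),
    ("unstable surface",      lambda t: t["surface"] == "scree"),
]

def vet_route(waypoints):
    """Apply every rule to every waypoint; return (safe, flagged) lists
    of (waypoint, [descriptions of rules tripped]) pairs."""
    safe, flagged = [], []
    for wp in waypoints:
        terrain = TERRAIN[wp]
        hits = [desc for desc, unsafe in RULES if unsafe(terrain)]
        (flagged if hits else safe).append((wp, hits))
    return safe, flagged

safe, flagged = vet_route(["wp1", "wp2", "wp3"])
print(safe)     # wp1 and wp3 pass every rule
print(flagged)  # wp2 trips the slope rule
```

The appeal of the rules-engine style is that domain experts can add or tune rules declaratively, without touching the evaluation loop, which is what makes the route-plotting step quick to automate.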

Unfortunately, Tommy wasn't a finalist due to an obstacle avoidance mishap during a qualifying trial: The vehicle swerved to avoid a barrier and crashed into a concrete wall. Thus, the offline rules-based system was never put to the test of plotting the Grand Challenge course.

The winning Stanford team had nearly 70 faculty and student contributors, drawn largely from the School of Engineering, and it spent 15 months developing and testing software and systems.

DARPA officials said the agency's research goals had been met, so it has no plans to hold future challenges. The military must now improve the durability and reliability of the technologies: 18 of the 23 robot vehicles failed to complete the course.

— Doug Henschen
