Robot Race Advances Military Technology


Doug Henschen, Executive Editor, Enterprise Apps

October 10, 2005


Robot history was made on October 8th in the second annual running of the DARPA Grand Challenge, a desert race held near Las Vegas featuring autonomous unmanned vehicles. Sponsored by the Defense Advanced Research Projects Agency, the event was dreamed up as a way to spur progress toward developing unmanned vehicles for the battlefield. Among the technologies guiding the bots were artificial intelligence and the same rules engine technology used in credit checks and business process management deployments.

Last year, the DARPA Grand Challenge offered a $1 million prize to the first vehicle to cross the finish line in less than 10 hours, but none of the competitors finished the course. In fact, the top vehicle traveled less than eight miles before it became mired in rugged terrain. This year's prize was increased to $2 million and, among the 195 applicants, 43 were selected as semifinalists and 23 as finalists. Five vehicles ultimately completed the 132-mile, obstacle-strewn course.

The winner was the Stanford University Racing Team's "Stanley," a modified Volkswagen Touareg outfitted with GPS, laser range finders, radar, vision systems and seven Pentium M computers using artificial intelligence software to analyze sensor data and geospatial mapping information. Stanley finished the course in 6 hours and 54 minutes, averaging 19 miles an hour.

Second was the Red Team from Carnegie Mellon University with "Sandstorm," a modified first-generation Hummer that finished the course in 7 hours and 4 minutes. Carnegie Mellon also took third place with "H1ghlander," a modified H1 Hummer that finished in 7 hours and 14 minutes.

What made the difference in this year's performance? "[The teams] have great software and great smarts," says DARPA Director Tony Tether. "The Stanford machine uses a learning algorithm, and it actually learns the more it is used. I'm told it has hundreds and perhaps more than 1,000 miles of experience in desert terrain."

Race officials withheld final course information until two hours before the race, forcing contestants to rely on more than preprogrammed GPS waypoints to lead the vehicles along a safe course. Tank traps, parked vehicles, cones and tunnels were also used to force vehicles to deal with obstacles in real time. That's where the combination of vision systems, radar, laser radar and artificial intelligence software came in.

In contrast to the large university-led teams, semifinalist Team Jefferson, from startup company Perrone Robotics, tried to compete on a bare-bones budget using open-source and donated off-the-shelf commercial software, including a Blaze Advisor rules engine from Fair Isaac. The team's customized dune buggy, "Tommy," relied on Perrone's Java-based Mobile Autonomous X-bot (MAX) software platform combined with radar, laser radar and GPS sensor technology for online navigation and obstacle avoidance. Fair Isaac's rules engine was to be used offline to compare the final course data supplied by DARPA against available geospatial mapping data and plot the safest possible route—avoiding steep banks, gullies, cliffs and other impasses—within the two-hour window.

"Last year, the teams were pouring through the course data manually within that two-hour window, and they would have 50 people deciding, 'okay, let's go from this point to this point to this point and this point,'" Perrone explained during a qualifying round. "That's an example of something that can be easily automated with a rule engine. We're going to let the rule engine pour through the data and plot those points, and it will do it accurately within 20 minutes."

Unfortunately, Tommy didn't make the last cut because of an obstacle avoidance mishap. Traveling at nearly 40 miles per hour during a qualifying trial, the vehicle swerved to avoid a barrier and crashed into a concrete wall. Thus, the offline rules-based course plotting system was never put to the test of calculating the Grand Challenge course.

Stanford Racing, with a team of nearly 70 composed largely of faculty and students of Stanford's School of Engineering, spent 15 months developing and testing its software and systems.

DARPA officials said the goal of developing and demonstrating suitable technologies had been achieved, so there are no plans to hold future events. Researchers will now face the task of improving the durability and reliability of the technologies. Eighteen of the finalist vehicles failed to complete the course.

About the Author(s)

Doug Henschen

Executive Editor, Enterprise Apps

Doug Henschen is Executive Editor of InformationWeek, where he covers the intersection of enterprise applications with information management, business intelligence, big data and analytics. He previously served as editor in chief of Intelligent Enterprise, editor in chief of Transform Magazine, and Executive Editor at DM News. He has covered IT and data-driven marketing for more than 15 years.


