The UK government on Wednesday introduced two measures that "green-light" driverless cars on UK roads next year, joining authorities in the US, Belgium, China, France, Italy, Japan, and Sweden that have committed to testing autonomous vehicles.
The UK Department for Transport invited cities to bid to be one of up to three hosts for a 2015 trial of driverless cars. The trial is expected to last 18 to 36 months. The UK government also plans to review road regulations to determine whether its infrastructure can accommodate both cars with driver-assistance technology and fully autonomous cars.
The UK aims to make sure it remains at the forefront of an area of active research and future economic promise. "Driverless cars have huge potential to transform the UK’s transport network -- they could improve safety, reduce congestion and lower emissions, particularly CO2," said UK Transport Minister Claire Perry in a statement.
They could, but they could also cause problems. There appear to be enough problems still to be worked out that autonomous cars will remain research projects for many years to come.
[There is some clever automotive technology that will arrive before self-driving cars. Read Smarter Cars: 9 Tech Trends.]
To understand these problems, it helps to consider alternative names for these computer-controlled vehicles, because "self-driving car" overstates what these machines can do and omits obvious issues. Here are a few:
Earlier this month, The Guardian obtained an unclassified FBI report that warns, "Autonomy … will make mobility more efficient, but will also open up greater possibilities for dual-use applications and ways for a car to be more of a potential lethal weapon than it is today." You can't have your self-driving car because it can double as a missile on cruise control. The same problem will doom drones in inhabited areas: Someone will fly a bomb or bioweapon into a schoolyard or political rally, and restrictions will explode.
Autonomous vehicles are supposed to be ecologically friendly. But as has been pointed out in the New Republic and Gizmodo, a driverless car is another car on already congested roads and a vote against more efficient mass transit. If Silicon Valley really had public welfare and quality of life in mind, it would be working on buses and trains, and on promoting car-sharing services like Getaround, City CarShare, and ZipCar as alternatives to car ownership.
The autonomous vehicle industry probably creates a lot of work for automotive engineers and computer scientists. But autonomous vehicles themselves will eliminate a job that doesn't require as much education: driver. At the Re/Code conference in May, Uber cofounder and CEO Travis Kalanick explained that Google's self-driving cars could be helpful for Uber's business because trips become cheaper when there's no driver to pay. Every trucking company and limousine service is thinking the same thing.
According to The Financial Times, some in the car insurance industry see self-driving vehicles as "an existential threat to traditional car insurers because of expected improvements in road safety."
Perhaps such jobs should be eliminated. Out with the old, in with the new, right? Except it's never that simple. London's cabbies have already staged protests over the threat posed by Uber and other Internet-managed car services. They will show less restraint when the cars they're protesting disgorge their passengers and wait unoccupied.
In 2010, Sebastian Thrun, then an engineer at Google, estimated that self-driving technology could save 600,000 lives a year, half of the 1.2 million annual driving fatalities, according to World Health Organization figures. That's half as many as we could save if we banned driving worldwide. As an added bonus, a total driving ban would save the Earth's atmosphere from being overwhelmed by automotive carbon emissions. Welcome to bike utopia.
Imagine how many lives we could save if we let software handle our governance -- no war, billions in defense money shifted to healthcare and longevity research! -- or handle our grocery shopping -- no junk food, no alcohol, no cigarettes, less heart disease! Humanity saved! Think of the development potential of all those suddenly unnecessary parking spaces.
Autonomous vehicles will save lives by taking people who drive recklessly out of the equation. But reckless driving can't be eliminated while human drivers remain on the road. Just how far we should go in seeking to dehumanize driving for our own safety may not be a simple matter of statistics. Now ask yourself which surveillance-crazed regime would love to see more of its citizens tracked in network-connected mobile listening devices.
In truth, reducing auto accidents is a worthy goal. But take a look at Google's self-driving car prototype. Does it look sturdy to you? Google says its prototypes will be limited to 25 mph. Yes, that will save lives, particularly since one of the primary markets for these vehicles is likely to be older people with diminished driving ability. Of course, any car that keeps to 25 mph, whether human- or computer-controlled, is probably safer than a car moving faster.
But what happens when a truck crashes into the side of one of Google's tiny little robocars? Will passengers be protected by the airbags and frame strength of a traditional automobile? Better overall accident statistics matter, but they won't matter to those in autonomous cars if those vehicles offer less protection than a traditional car would.
"Take me to Coit Tower."
"I heard you say, 'Goiter.' Is this correct?"
"No, Coit Tower."
"I'm sorry, I didn't understand that. Could you say that again?"
"I want to go to Coit Tower, in San Francisco."
"I can't find any Coy Tower in San Francisco. Is there somewhere else you'd like to go?"
"Just let me drive."
"I'm sorry, Dave. I'm afraid I can't do that."
Even if self-driving cars can understand the vast range of passenger accents they're bound to encounter, can manage people with disabilities, and can correct for the inadvertent misdirection provided by clueless passengers, they're bound to end up in a situation their programming can't handle. Google's engineers are still refining how to handle weather and construction. A bug strike on a sensor -- messy at sufficient speed -- could throw off readings. Google Voice Search, Siri, and Cortana aren't yet ready to take the wheel.
Computer security is abysmal. Reports of hacked systems appear daily, and self-driving cars will be no different. A minute of GPS spoofing or jamming on a major road or bridge could bring self-driving cars to a halt, creating a massive traffic jam. Are we ready for denial-of-highway attacks, for the Internet of Things moving at highway speed?
Random killing machines
The term "self-driving" glosses over the role of software in doing the actual driving. We aren't ready for algorithms that make life-and-death decisions. What happens when a self-driving car sees a child dash into the road, too close for it to brake in time? Does the car swerve? If there are people on both the right and the left, which ones will the software choose to run over? Should the software select its victim by height -- save the short one in the road, a presumed child?
Should it assess the silhouette of a possible impact victim to see whether that person might be pregnant? If the software doesn't make that choice, if it chooses a victim randomly, how does that play out in court? A person would just react. But software can deliberate in milliseconds. Should the programmer be held accountable for choosing, or selecting randomly, who gets struck when there's no other option?
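To see why this unsettles people, it helps to make the dilemma concrete. The sketch below is entirely hypothetical -- no vendor has published such a policy, and the `Obstacle` type, the sensor fields, and the height threshold are invented for illustration -- but it shows how an unavoidable-collision choice, once written as code, becomes an explicit, auditable rule that someone had to author:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    position: str   # "road", "left", or "right" (hypothetical sensor output)
    height_m: float # estimated height from the perceived silhouette

def choose_maneuver(obstacles):
    """Toy policy for when braking distance is insufficient.

    Swerve toward whichever side is empty; if both sides are
    occupied, stay on course. A real system would weigh far more
    factors (speed, road edges, vehicle dynamics, occupant risk),
    but any such policy encodes the same kind of explicit choice.
    """
    sides = {o.position for o in obstacles}
    if "left" not in sides:
        return "swerve_left"
    if "right" not in sides:
        return "swerve_right"
    return "stay_course"

# A child-sized obstacle ahead, nothing to either side: the rule swerves.
print(choose_maneuver([Obstacle("road", 1.1)]))  # swerve_left
```

Note that the "prefer left" tie-break is arbitrary -- exactly the kind of buried decision that would be dissected in court.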
Captive audience cars
Google is an advertising company, so it would not be unexpected if Google self-driving cars incorporated ads in some form. The company already has a patent for ad-subsidized taxi service. If it presented passengers with ads to defray costs in its self-driving cars, it would only be doing what traditional taxi services have already done. How long would it take before ad blocking took root in Google's cars, in the form of duct-tape-covered displays and speakers gagged with chewing gum?
Self-driving cars initially will be assisted-driving cars: They will watch over the shoulder of drivers and attempt to intervene only under extreme circumstances. Computer-aided traction systems and auto-braking systems already do this, but these systems will become more comprehensive.
Even so, partial automation poses its own problems. As MIT researchers noted last year, automated systems contribute to boredom, which in turn degrades human oversight and may lead to accidents. "[T]he human brain requires some level of stimulus to keep its attention and performance high," the researchers wrote. "Lacking this input, they seek it elsewhere, leaving them susceptible to distraction either by external stimuli or by the wrong information." Your collision avoidance system might just make you a worse driver.
These potential pitfalls can be overcome, but doing so will take more than a few years. During this period, we should consider self-driving cars in a larger social context. Transportation is more than an issue of health and safety. It touches on politics, personal autonomy, privacy, public infrastructure, and privilege. We need to decide whether we want a self-driven society or whether we're content just to go along for the ride.