Drivers Prefer Autonomous Cars That Don't Kill Them
A new study shows that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if it kills the driver. Unless they are the drivers.
A car is about to hit a dozen pedestrians. Is it better for the car to veer off the road and kill the driver but save the pedestrians? Or is it better to save the driver and kill all those other people? That's the thorny philosophical question that the makers of autonomous vehicles -- self-driving cars -- are grappling with these days, and a new study sheds some light on what people actually want that car to do.
It turns out that the answer depends on whether you are the driver of the car or not. People generally think it's a good idea for autonomous vehicles to save the most lives. The study calls this a utilitarian autonomous vehicle, which fits the utilitarian moral doctrine of the greatest good for the greatest number.
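To make the distinction concrete, here is a minimal, purely illustrative sketch of how a utilitarian decision rule differs from a self-protective one in the scenario from the opening paragraph. This is not code from the study or from any carmaker; the Action class and both selection functions are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One maneuver the car could take, with the expected deaths it causes."""
    name: str
    pedestrian_deaths: int
    occupant_deaths: int

def utilitarian_choice(actions):
    """Pick the maneuver that minimizes total deaths, counting everyone equally."""
    return min(actions, key=lambda a: a.pedestrian_deaths + a.occupant_deaths)

def self_protective_choice(actions):
    """Pick the maneuver that minimizes occupant deaths first,
    breaking ties by fewer pedestrian deaths."""
    return min(actions, key=lambda a: (a.occupant_deaths, a.pedestrian_deaths))

if __name__ == "__main__":
    dilemma = [
        Action("stay on course", pedestrian_deaths=12, occupant_deaths=0),
        Action("swerve off the road", pedestrian_deaths=0, occupant_deaths=1),
    ]
    print(utilitarian_choice(dilemma).name)      # "swerve off the road"
    print(self_protective_choice(dilemma).name)  # "stay on course"
```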
Generally, people would like everyone to use cars that save the most lives, even if that means sacrificing the driver. With one exception:
They don't want to buy this kind of car for themselves.
For themselves, people would generally prefer to buy the vehicle that ensures the greatest safety for the driver and passengers.
"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote.
These results have significant implications for whether manufacturers can successfully sell vehicles programmed with these utilitarian algorithms, and whether regulations requiring those algorithms would succeed.
"The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."
This new study is authored by academics from three universities across the disciplines of economics, psychology, and social science: Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon department of psychology, and Iyad Rahwan of the media lab at MIT.
The researchers conducted six separate online surveys with 1,928 total participants in 2015. Each survey asked a different set of questions, progressively getting closer to the moral dilemma at hand.
The first survey, which included 182 participants, and the second survey, which had 451 respondents, presented a dilemma that varied the number of pedestrian lives that could be saved.
The third survey, which had 259 participants, was the first to introduce the idea of buying an autonomous vehicle programmed to minimize casualties, even if that meant sacrificing the driver and passengers to save the lives of more pedestrians. It was also the first to show that participants preferred the self-protective model for themselves.
The fourth survey, which had 267 respondents, added more detail to the moral dilemma with a rating system for various algorithms. It produced a result similar to the third survey's: "It appears that people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves."
The study's authors say this is the "classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome. One typical solution in this case is for regulators to enforce the behavior leading to the best global outcome."
Regulators do enforce such behavior in some domains, though not without resistance. For instance, some citizens object to regulations that require children to be immunized before they start school, the study's authors note.
The authors further tested the idea of regulating utilitarian autonomous vehicles in the fifth survey, which included 376 participants. Results varied depending on how many pedestrians could be saved; participants were given scenarios involving between 1 and 10 pedestrians.
In the sixth survey, the authors asked the 393 participants whether they would buy vehicles with utilitarian algorithms mandated by the government. Participants said they were less likely to buy a vehicle with the mandated algorithm than a self-driving car without it.
"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."
The study's authors said that figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today, and right now there's no simple answer.
Today's self-driving car rules are a patchwork of state regulations. In April, a handful of technology and automotive companies announced the formation of the Self-Driving Coalition for Safer Streets to accelerate federal regulation of the move to driverless cars.
Several carmakers and technology companies are working on making autonomous vehicles, including Toyota, Google, Acura, BMW, and many others.