6/25/2016 12:06 PM

Drivers Prefer Autonomous Cars That Don't Kill Them

A new study shows that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if it kills the driver. Unless they are the drivers.


A car is about to hit a dozen pedestrians. Is it better for the car to veer off the road and kill the driver but save the pedestrians? Or is it better to save the driver and kill all those other people? That's the thorny philosophical question that the makers of autonomous vehicles -- self-driving cars -- are grappling with these days, and a new study sheds some light on what people actually want that car to do.

It turns out that the answer depends on whether you are the driver of the car or not. People generally think it's a good idea for autonomous vehicles to save the most lives. The study calls this a utilitarian autonomous vehicle, which fits the utilitarian moral doctrine of the greatest good for the greatest number.
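
To make the distinction concrete, here is a minimal sketch, in Python, of how a utilitarian policy and a self-protective policy would choose differently in the same crash scenario. It is purely illustrative: the study did not publish such code, and the scenario model, names, and numbers below are assumptions made for the example.

# Illustrative toy model only; not the researchers' code or any real AV system.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_deaths: int    # expected deaths inside the car
    pedestrian_deaths: int  # expected deaths outside the car

    @property
    def total_deaths(self) -> int:
        return self.occupant_deaths + self.pedestrian_deaths

def utilitarian_choice(options):
    # Greatest good for the greatest number: minimize total deaths,
    # even if that means sacrificing the occupants.
    return min(options, key=lambda m: m.total_deaths)

def self_protective_choice(options):
    # Protect the occupants first; break ties by fewer pedestrian deaths.
    return min(options, key=lambda m: (m.occupant_deaths, m.pedestrian_deaths))

options = [
    Maneuver("stay on course", occupant_deaths=0, pedestrian_deaths=10),
    Maneuver("swerve off the road", occupant_deaths=1, pedestrian_deaths=0),
]
print(utilitarian_choice(options).name)       # "swerve off the road"
print(self_protective_choice(options).name)   # "stay on course"

The gap the study documents is between these two policies: people endorse the first in the abstract but, as buyers, lean toward the second.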

Generally, people would like it if everyone would use cars that saved the most lives, even if they had to sacrifice the life of the driver to do so. With one exception.

They don't want to have to buy this kind of car for themselves.

Generally, for themselves, people would prefer to buy the vehicle that ensures the greatest safety for the driver and passengers.

"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote.

These results have significant implications for whether manufacturers can be commercially successful with utilitarian algorithms, and for whether regulations requiring those algorithms in autonomous vehicles will succeed.

[Like the idea of self-driving cars? What about flying ones? Read Google's Larry Page Investing Millions in Flying Cars.]

"The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."

The new study was authored by academics from three universities across the disciplines of economics, psychology, and social science: Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon's psychology department, and Iyad Rahwan of the MIT Media Lab.

The researchers conducted six separate online surveys with 1,928 total participants in 2015. Each survey asked a different set of questions, progressively getting closer to the moral dilemma at hand.

The first survey, which included 182 participants, and the second survey, which had 451 respondents, presented a dilemma that varied the number of pedestrian lives that could be saved.

The third survey, which had 259 participants, introduced for the first time the idea of buying an autonomous vehicle programmed to minimize casualties, even if that meant sacrificing the driver and passengers to save the lives of more pedestrians. This survey also produced the first sign that participants would prefer the self-protective model for themselves.

(Image: Henrik5000/iStockphoto)

Survey number four, which had 267 respondents, added more detail to that moral dilemma with a rating system for various algorithms. It produced a result similar to that of the third survey: "It appears that people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves."

The study's authors say this is the "classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome. One typical solution in this case is for regulators to enforce the behavior leading to the best global outcome."

Regulators already do this in some cases. For instance, children are commonly required to be immunized before they start school, even though some citizens object to those regulations, the study's authors note.

The authors further tested this idea of regulating utilitarian autonomous vehicles in the fifth survey, which included 376 participants. Results varied depending on how many pedestrians could be saved; participants were given scenarios involving one to ten pedestrians.

In the sixth survey, the authors asked the 393 participants whether they would buy vehicles with utilitarian algorithms mandated by the government. Participants said they were less likely to buy a vehicle with the mandated algorithm than to buy a self-driving car without it.

"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."

The study's authors said that figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today, and right now there's no simple answer.

Today's self-driving car rules are a patchwork of state regulations. In April, a handful of technology and automotive companies announced the formation of the Self-Driving Coalition for Safer Streets to accelerate federal regulations around the move to driverless cars.

Several carmakers and technology companies are working on autonomous vehicles, including Toyota, Google, Acura, BMW, and many others.

Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG's Infoworld, Ziff Davis Enterprise's eWeek and Channel Insider, and Penton Technology's MSPmentor. She's passionate about the practical use of business intelligence, ... View Full Bio

Comments
IlyaT612, User Rank: Apprentice
7/23/2016 | 10:02:28 PM
Re: I'd want an AV to be way smarter...
An AV fixes that much in the same way mass transit does -- you do not have to pay attention to the road, and can spend your time reading, eating, sleeping, or even working. Yes, you will still sit in traffic, but it will not be wasted time.
vnewman2, User Rank: Ninja
6/30/2016 | 11:05:06 PM
Re: I'd want an AV to be way smarter...
Right, @TerryB - it's not the driving itself (unless you are doing long hauls all the time, perhaps) but the issues you encounter while driving. Cue the opening scene of "Office Space."


And here's our first casualty: "Ex-Navy SEAL becomes the first motorist to die in self-driving car after Tesla autopilot crash - just a month after filming himself in near miss"
Konradt, User Rank: Apprentice
6/30/2016 | 2:23:48 AM
Would you want to die because a pedestrian was crossing the street without looking?
I'm totally against the idea that they would implement a strategy to minimize casualties if it would result in killing the people in the car.

In essence, the car is driven by the computer, so the car is designed to optimize safety. If it comes into a situation where it has to choose between killing pedestrians or the occupants of the car, then it is either faulty programming or maybe even the pedestrians' fault for crossing the street at a moment when that is not permitted.

If this kind of technology were implemented, you could actually (un)intentionally kill people by crossing the street.
will_a, User Rank: Apprentice
6/29/2016 | 9:38:47 PM
Re: Great headline...
The universally "right thing" for any computer system to do is simply:
  1. Faithfully execute the commands of its operator
  2. Seek the interests of its operator, so long as it does not undermine the first rule

We could talk about a computer serving local laws or the interest of society in general, but those things would be a distant third and probably should not even be listed.

I realize it is a strong statement I am making.  After all, what if the operator is doing bad things?  My answer is that simply isn't a problem for computers to solve.  An operator doing bad things is a social or political problem, not a computational one.

With how personal and pervasive computers have become/are becoming, it's like each person being linked to a kill-switch. If those kill-switches are concentrated into the hands of a few (corporations, governments, doesn't matter), the risk of those few doing evil things is just too great. Only by ensuring that each person owns their own kill-switch can we prevent such disasters.
will_a, User Rank: Apprentice
6/29/2016 | 9:17:38 PM
Patients Prefer Doctors That Don't Kill Them
In other news:
  • polls indicate people prefer doctors that harvest organs from a single healthy patient if it will save the lives of more sick people, except when the person being polled is the healthy patient.
  • polls indicate people prefer lawyers who seek to maximize justice, even if it means turning on their client, except when the person being polled is the client.

I jest, of course, but given the obvious parallels with autonomous vehicles, why do we not favor utilitarian doctors or lawyers?

The one big reason is that performing such utilitarian calculations is far more difficult than obeying a duty to its occupants. An AV is far more likely to correctly save its occupants than it is to correctly save some arbitrary number of pedestrians, other motorists, etc. In other words, it is more likely that a utilitarian AV will drive itself and its occupants off a cliff in order to save a group of scarecrows (that it thinks are people) than a dutiful AV is to drive into a crowd while failing to save its occupants.

There is also the broader, systemic picture. Even if it is true a doctor can save more people in the moment by sacrificing one healthy patient, more people overall are probably saved by the fact that all doctors are bound by some form of the Hippocratic oath. Ditto for lawyers.

This article even notes:

"Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."

If people willingly choose to sacrifice themselves, that is one thing. However, an AV acting on behalf of an individual cannot make that choice for them.

TerryB, User Rank: Ninja
6/29/2016 | 12:43:08 PM
Re: I'd want an AV to be way smarter...
>> How about looking at what causes people to hate driving so much and change that?

I don't think it's driving most people hate; it's the sitting (or going 20 mph in a 60 mph zone) that no one likes. You could have a $250,000 Ferrari and rush hour in any city of any size would still suck. Not exactly sure an AV fixes that; only better mass transit does. Or your idea of making personal choices to be on the road less, to kill some congestion.
vnewman2, User Rank: Ninja
6/28/2016 | 10:58:18 PM
Re: I'd want an AV to be way smarter...
@moarsauce123  "Stapling computers onto something that did not change much in the past 100 years is just falling way too short."


That's a brilliant way to describe it. I do think this is a misguided attempt at making what people consider to be a chore much easier. Driving is a privilege, but most consider it a hassle. How about looking at what causes people to hate driving so much and changing that? What if more companies made an effort to hire locally? What if people supported local businesses more often?

I personally have cut nearly all of my driving down to about a 3-5 mile radius. It takes a little work and planning, but those who live in urban areas should be able to do so fairly easily. Plenty of folks wouldn't be able to feasibly do this because of logistics alone, but plenty of people could. Get those people off the roads and maybe driving becomes a pleasant experience again.

I love the idea of the conveyor belt concept. I'll take that over AI any day.
moarsauce123, User Rank: Ninja
6/28/2016 | 8:13:51 AM
I'd want an AV to be way smarter...
...and not even get into a situation where the only choice is between killing more or less people. At that point the system already encountered a catastrophic failure.

AVs need to detect how many potential causes for an accident are around and adjust to that by reducing travel speed and analyzing alternative paths. For example, if there is a pedestrian on a sidewalk, then lower the speed a bit and move away from the curb while staying within the lane. If there are many people, lower the speed even more and move to a different lane if that is an option. Also, when planning the travel route, avoid high-risk areas as much as possible.

Let's step back even more: all this AV talk is about individual transport, which is really only needed for the last mile. We should invent and implement modes of transportation that make use of shared resources such as rail or other designated means of travel. On main access roads, install what amounts to a conveyor belt. Have AI-assisted means for easy on/easy off, and while vehicles are on it, control the traffic flow accordingly. At any given point in time the system knows exactly where every vehicle is and can easily avoid collisions. The self-driving is left to side roads, which already are less prone to deadly accidents.

We need a totally different approach to transportation. Stapling computers onto something that did not change much in the past 100 years is just falling way too short.
Whoopty, User Rank: Ninja
6/28/2016 | 7:49:14 AM
Re: Great headline...
That's what I hope for too. I mean when it comes to important criminal cases or medical matters, we prefer people to address them without emotion and come at them as pragmatically as possible. In those cases, you would hope that the choice would see the car save the most people.

That said, I think these cars are going to be so smart and secure, thanks to many other safety factors, that this would be a pretty rare occurrence. If it can calculate that the driver would likely die, the cars will at the very least do their best to protect them, contact emergency services immediately and, hell, why not even have Demolition Man-style foam packing in around the driver?
vnewman2, User Rank: Ninja
6/27/2016 | 10:40:30 PM
Great headline...
I laughed out loud when I read it. It is quite the quandary, isn't it? But once you take the human factor out of it by removing emotions, panic, and reflexes, it would seem like the car would always "do the right thing." That is, depending on what "the right thing" actually is according to whoever is programming it.