Commentary
David Wagner
6/22/2015 06:06 PM

Why Humans Don't Trust Robots, AI

AI and robots are entering the workplace, but humans don't trust them. Here's why, and what we can do to make them more trustworthy.

These 8 Technologies Could Make Robots Better

Humans don't trust robots and artificial intelligence. While nearly a quarter of American consumers (24%) think that self-driving cars enhance safety because they eliminate human error, that's about as far as we're willing to go. Considering 90% of car accidents are caused by human error, it seems likely that one day robots will be better drivers than humans, yet that doesn't matter to most people.

The issue at hand is called "algorithm aversion" or "algorithm avoidance." It usually takes only one mistake -- or the perception of a mistake -- for a human to stop trusting a robot. In 2014, researchers at Wharton ran a study in which people were rewarded for making accurate predictions. Participants could rely on their own forecasts or on an algorithm's. The algorithm repeatedly outperformed the humans. Even so, participants who saw the algorithm make individual errors lost faith in it, while shrugging off multiple errors of their own.

Similar results have been shown in studies of robots serving up wine, dispensing medical advice, and picking stocks. Again and again, humans lost faith in the robot, no matter how poorly their own picks performed compared to the robot's. And the robot doesn't even have to do anything wrong. We often mistrust robots simply because they react to something we don't see ourselves, or because they act on rules we haven't been made privy to.

[ Heck, some people are convinced AI is evil. Read No, AI Won't Kill Us All. ]

This is a problem if you're bringing AI and robots into the workplace, or building AI into products you intend to sell. How do you increase trust in AI and robots so that workers accept them?

Strangely enough, the answer may be to make robots look less confident. Robots that hesitate or look confused gain trust from people rather than lose it. A study at the University of Massachusetts Lowell asked people to help robots through a slalom course. Participants could guide the robot with a joystick, let it navigate on its own, or use a combination of the two. The robot was considerably faster in automated mode, but, unbeknownst to the participants, it was programmed to make mistakes. When the robot made mistakes, participants were likely to give up on it. Some of the robots, however, were also programmed to express doubt: when they "weren't sure" which way to go, they would switch from a happy face to a sad face. When the robot showed doubt, humans were more likely to trust it to figure things out on its own.

(Image: Santos "Grim Santo" Gonzalez via Flickr)

These small human touches have been shown to help people trust self-driving cars in particular. Two studies -- one from the University of Eindhoven in The Netherlands, and another by Northwestern, the University of Chicago, and the University of Connecticut -- have tried to add small bits of humanity to self-driving cars. The Eindhoven study created a driver called Bob with human-like facial expressions and head movements. In the other study, a driving simulator featured a talking driver named Iris with a friendly female voice. People were more likely to trust Iris than an unnamed machine.

Little bits of humanity, especially friendliness and self-doubt, create more trust in people, but there is such a thing as going too far. Make things too human and people get frightened. Generally, people don't like robots that are "too real."

Obviously, the issue of AI is going to be touchy over the next decade. As we introduce robots to the workplace, it isn't merely about trusting their judgment; there are also concerns about job loss, morale, and bruised egos. But until people know they can trust an AI, you can't really handle the rest. So figuring out how to humanize your AI for your team is a good place to start.

David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, ...
Comments
kstaron,
User Rank: Ninja
6/25/2015 | 11:45:42 AM
expressing doubt
I wonder if the reason we like them better when the robots express doubt is that it shows the robot is thinking through the process, not just doing a calculation. I wonder if I would trust my Garmin more if it started to say "turn right in 500 ft, I think." (Actually, in its case I just want it to tell me while I can still make a lane change.)

I'm not sure when I will really trust AI. In the games I play, bad AI on the characters' part usually leaves me saying things unfit to print.
impactnow,
User Rank: Ninja
6/23/2015 | 7:57:45 PM
Re: AI
I don't think robots are that far away; they have been used for years in manufacturing and other repetitive tasks. 3D coupled with interactive self-service can make robots more of a reality. While they will always be prone to hacking, all of the technology we use is prone to hacking, and that shouldn't delay their development and evolution.
SunitaT0,
User Rank: Ninja
6/23/2015 | 2:43:06 PM
Re: AI
I would not get too excited because there is still a long way to go. Unless robots are made secure, they will be prone to hacking and misuse. There is still the problem of synchronising robotic thought processes with a digital cerebrum. Robots are also prone to power failures, which can cause different deduction outputs.
Ariella,
User Rank: Author
6/23/2015 | 12:45:23 PM
Re: AI
@David yeah, wanting to wipe out all humanity is a problem, even with the best of intentions. 
David Wagner,
User Rank: Strategist
6/23/2015 | 12:39:35 PM
Re: Good Reason to Fear
@Gary_el- I'm not sure I buy the idea that they concentrate wealth in the hands of those that control them. You could just as easily look at robots and AI as democratizing. They make decisions without bias. They are already getting affordable for many people, especially those that operate in the cloud. If the power is available to everyone, it isn't as frightening.
David Wagner,
User Rank: Strategist
6/23/2015 | 12:37:14 PM
Re: AI
@broadway0474- The thing is that perfection scares humans, too. Humans don't trust AIs that work in ways they don't understand. And I think perfection requires that. 
David Wagner,
User Rank: Strategist
6/23/2015 | 12:34:38 PM
Re: AI
@ariella- Funny enough, I thought the most sympathetic character in Avengers was Ultron. Tony Stark was a paranoid fool. Captain America had reduced his humanity to just being a soldier, which means he's next to screw up. Black Widow wanted out. Thor has always just been a caricature. Hulk just smashes. Ultron was trying to remake the world as a better place as he saw it, rather than trying to keep it from changing.

In the end, of course, he's wrong. But like all good bad guys his motivations were understandable. 

Funny enough, that sort of "human" AI is what studies show might be more trustworthy. Granted, without the evil part. :)
Ariella,
User Rank: Author
6/23/2015 | 8:45:19 AM
Re: AI
@Broadway0474 that's an interesting perspective. I hadn't thought of looking at it that way, though it does make sense.
Whoopty,
User Rank: Ninja
6/23/2015 | 7:12:12 AM
Defer
I think the key to trusting an AI will be the feeling of control. If it occasionally defers to our judgement, more people will trust it, as it will be clear we still have some sort of handle on the situation. The real problem with a lot of AI at the moment is the lack of nuance and intuition. That will get better with time, but I think AIs will always do things differently from us, as they think very differently.
SachinEE,
User Rank: Ninja
6/23/2015 | 3:01:54 AM
Re: AI
@ariella Just like I said, movies and documentaries like that will instill fear in people rather than prepare the market to accept automated technology. There is also the question of losing jobs. I don't know how people would react to a fully automated society.