Why Humans Don't Trust Robots, AI
AI and robots are entering the workplace, but humans don't trust them. Here's why, and what we can do to make them more trustworthy.
Humans don't trust robots and artificial intelligence. Nearly a quarter of American consumers (24%) think self-driving cars enhance safety because they eliminate human error, but that's about as far as we're willing to go. Given that roughly 90% of car accidents are caused by human error, robots will likely one day be better drivers than humans. To most people, that doesn't seem to matter.
The issue at hand is called "algorithm aversion" or "algorithm avoidance." It usually takes only one mistake -- or the perception of one -- for a human to stop trusting a robot. In 2014, Wharton researchers ran a study in which people were rewarded for making good predictions. Participants could rely on their own predictions or on an AI's, and the algorithm's predictions were repeatedly shown to be better. Even so, participants would see the algorithm make individual errors and lose faith in it, despite making plenty of errors of their own.
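To make that dynamic concrete, here is a minimal sketch, in Python, of the kind of setup the Wharton study describes. The numbers and function name are invented for illustration: the model forecaster is more accurate on average, yet it still misses on every trial -- and it is those visible misses, not the averages, that participants reacted to.

    import random

    random.seed(0)  # reproducible toy numbers

    def mean_abs_error(noise_sd: float, trials: int = 200) -> float:
        # Each trial: a true value is drawn, the forecaster guesses it with
        # Gaussian noise of the given spread, and we average the miss.
        total = 0.0
        for _ in range(trials):
            truth = random.gauss(50, 10)
            guess = random.gauss(truth, noise_sd)
            total += abs(guess - truth)
        return total / trials

    # Hypothetical spreads: the model is roughly twice as accurate on
    # average, but both forecasters still err on every single trial.
    print(f"human mean error: {mean_abs_error(8.0):.1f}")
    print(f"model mean error: {mean_abs_error(4.0):.1f}")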
Similar results have appeared in studies of robots serving wine, giving medical advice, and picking stocks. Again and again, humans lost faith in the robot no matter how poorly their own picks performed by comparison. And the robot doesn't have to do anything wrong: we often mistrust robots simply for reacting to something we can't see ourselves, or for acting on rules we haven't been made privy to.
[ Heck, some people are convinced AI is evil. Read No, AI Won't Kill Us All. ]
This becomes a problem as you bring more AI and robots into the workplace, or build AI into products you intend to sell. How do you increase trust in AI and robots enough that workers accept them?
Strangely enough, the answer may be to make robots look less confident. Robots that hesitate or look confused gain trust rather than lose it. A study at the University of Massachusetts Lowell asked people to help robots through a slalom course; participants could steer the robot with a joystick, let it drive itself, or use a combination of the two. The robot was considerably faster in automated mode but, unbeknownst to the participants, the robots were programmed to make mistakes. When a robot erred, participants tended to give up on it and take over. Some of the robots, however, were also programmed to express doubt: when they "weren't sure" which way to go, their display switched from a happy face to a sad face. When the robot showed doubt, humans were more likely to trust it to work things out on its own.
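In engineering terms, the behavior described above amounts to exposing the robot's internal confidence through its face. Here is a minimal sketch, assuming a hypothetical planner that reports a confidence score between 0 and 1; the threshold and the names are invented, not taken from the study.

    from enum import Enum

    class Face(Enum):
        HAPPY = "happy"   # robot is confident in its chosen path
        UNSURE = "sad"    # robot visibly signals doubt to nearby humans

    DOUBT_THRESHOLD = 0.7  # hypothetical cutoff; the study doesn't publish one

    def expression_for(path_confidence: float) -> Face:
        # Surface low planner confidence as a "not sure" face -- the cue
        # that made participants more willing to let the robot keep
        # driving itself.
        return Face.HAPPY if path_confidence >= DOUBT_THRESHOLD else Face.UNSURE

    # Example: confidence drops as the robot nears a tight slalom gate.
    for confidence in (0.95, 0.55):
        print(f"confidence {confidence:.2f} -> show {expression_for(confidence).value} face")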
These small human touches have been shown to help people trust self-driving cars in particular. Two studies -- one from Eindhoven University of Technology in the Netherlands, and another by Northwestern, the University of Chicago, and the University of Connecticut -- tried adding small bits of humanity to self-driving cars. The Eindhoven study created a virtual driver called Bob with human-like facial expressions and head movements. In the other study, a driving simulator featured a talking driver named Iris with a friendly female voice. People were more likely to trust Iris than an unnamed machine.
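In interface terms, the "Iris" effect is cheap to prototype: the same status message lands differently when a named persona delivers it in the first person. A minimal sketch follows, with the phrasing and function name invented for illustration.

    from typing import Optional

    def announce(event: str, persona: Optional[str] = None) -> str:
        # Bare machine readout vs. the same event voiced by a named persona.
        if persona is None:
            return f"STATUS: {event}"
        return f'{persona}: "I\'m {event}."'

    print(announce("changing lanes to pass"))          # unnamed machine
    print(announce("changing lanes to pass", "Iris"))  # named, first-person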
Little bits of humanity, especially friendliness and self-doubt, build trust, but there is such a thing as going too far. Make things too human and people get frightened -- the effect often called the "uncanny valley." Generally, people don't like robots that are "too real."
Obviously, AI is going to be a touchy issue over the next decade. As we introduce robots into the workplace, it isn't merely a matter of trusting their judgment; there are also job losses, morale issues, and bruised egos to manage. But until people know they can trust an AI, you can't really deal with the rest. So figuring out how to humanize your AI for your team is a good first step.