Will Our Love Of 'Imperfect' Robots Harm Us?
Flawed robots make people more comfortable in certain settings, which is fine. But what happens when we need robots to be perfect?
We are drawn to robots that have the same kinds of cognitive biases and flaws that we do, according to a report from researchers at the University of Lincoln in the UK. Because of this, we may need to consider making robots less perfect in order to build positive, long-term human-robot relationships.
This is an especially important finding given that Gartner recently predicted that by the end of 2018, 3 million people worldwide will have a robot for a boss. If we will soon be interacting with robots at work, with some of them even giving us orders, is it a good idea to make them less perfect just so we feel comfortable?
The University of Lincoln researchers, who presented their findings at the International Conference on Intelligent Robots and Systems (IROS) in Hamburg earlier this month, didn't tackle that specific question. Instead, they focused on robots used in education for children on the autism spectrum and those that support caregivers for the elderly.
The researchers introduced the cognitive biases of forgetfulness and the "empathy gap" into two different robots: ERWIN (Emotional Robot With Intelligent Network), which can express five basic emotions, and Keepon, the small yellow robot toy that has been used to study child social development. In both cases, half the interactions with these robots included the cognitive biases and half did not.
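The paper doesn't include its source code, but a toy sketch can suggest what injecting a "forgetfulness" bias into a robot's dialogue might look like in practice. This is a minimal Python illustration under assumed conditions; the recall function, the FORGET_PROBABILITY constant, and the stored facts are all hypothetical, not the researchers' implementation:

    import random

    # Hypothetical sketch: a robot that sometimes "forgets" facts it has
    # stored, mimicking a forgetfulness bias in the biased condition.
    FORGET_PROBABILITY = 0.3  # assumed chance of a memory lapse per query

    def recall(memory, key, biased):
        """Return a stored fact, occasionally 'forgetting' it when biased."""
        if key not in memory:
            return "I don't know that yet."
        if biased and random.random() < FORGET_PROBABILITY:
            return "I'm sorry, I can't seem to remember that right now."
        return memory[key]

    memory = {"favorite_color": "Your favorite color is blue."}
    print(recall(memory, "favorite_color", biased=True))   # may lapse
    print(recall(memory, "favorite_color", biased=False))  # always answers

In the unbiased condition the robot answers flawlessly every time; in the biased condition, the occasional lapse is exactly the kind of humanlike mistake the study's participants responded to.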
Overwhelmingly, human subjects said they had more meaningful interactions with the robots when the machines made mistakes.
"The cognitive biases we introduced led to a more humanlike interaction process," Mriganka Biswas, the lead researcher explained in a press release. "We monitored how the participants responded to the robots and overwhelmingly found that they paid attention for longer and actually enjoyed the fact that a robot could make common mistakes, forget facts and express more extreme emotions, just as humans can."
He went on to say something a little more controversial, to my mind: "As long as a robot can show imperfections which are similar to those of humans during their interactions, we are confident that long-term human-robot relations can be developed."
Granted, this study was on children and the elderly. The needs of these groups are clearly different from those of people in an office setting. At the same time, the notion that humans enjoy seeing flaws and biases in robots because it makes them seem more like us is worrisome.
Some humans have a bias toward racism. No doubt a racist robot would be pleasing to those people. Sure, that's an extreme example. Cognitive biases take all forms, but we try to train ourselves out of as many as possible in a business setting. For instance, many people have decision-making cognitive biases like those that cause us to go with heuristic shortcuts (or gut feelings) that lead to fast, but not always accurate, decisions. Do we want robots that shoot from the hip (or look like they do)? Aren't we trying to run data-driven businesses?
For most people, exposure to robots has been limited to science fiction. We're willing to accept the android Lieutenant Commander Data from Star Trek because he has no emotions. We're OK with him remembering everything and being faster and stronger because he lacks something essentially human. We can handle C-3PO from the Star Wars movie franchise because he's a coward and a bumbling fool, even though he's fluent in more than 6 million forms of communication and can calculate probability faster than any human. Those flaws make it easier for us to accept our own weaknesses in front of machines that are potentially superior to us.
[What's wrong with robot masters anyway? Read 10 Reasons Why Robots Should Rule the World.]
What happens in a business setting? Do we keep the flaws in robots to make people happy or do we learn to accept our own inadequacies in the name of better business? We're not there yet. Robots aren't superior to humans.
But if Gartner is right, it won't be long before a robot gives you an order. Will you trust the order? Will you take its judgment over your own? Will it need to pretend to forget things just so you can accept its orders? Long before we have to worry about robots being our new masters, we need to think about how we will work side by side with companion robots. Daryl Plummer, a Gartner vice president and Fellow, said, "In the next few years, relationships between people and machines will go from cooperative to co-dependent to competitive."
If we can't handle being cooperative without having to dumb down robots, how are we going to handle being competitive with them?