Geekend: The Ethics Of Making Robots Like People - InformationWeek


Commentary
10/17/2014 01:00 PM
David Wagner


The pinnacle of artificial intelligence is to copy people, but is it ethical?


Much has been made of how close we are to building robots or software that could pass the Turing Test, which measures whether a computer can convince people it is a person. But is that ethical? Three computer science professors published a paper in the Journal of Experimental & Theoretical Artificial Intelligence concluding that we really shouldn't look to build robots that can deceive humans into thinking they're people -- at least not without much more careful consideration.

The three professors examined the ethical realities of multiple situations involving robots, users, and developers. For example, they discussed situations where a developer creates a robot to commit bank fraud by pretending to be a person. This is obviously unethical. But the unethical part is committed by the developer, not the robot.

There are two more interesting questions that arise from the paper. First, if a robot programmed to do only ethical things masquerades as a human, can it still be unethical? Second, can we do unethical things to robots if they are programmed to be human-like? The authors frame their argument around a rather complicated concept from information ethics called metaphysical entropy. See the paper for a full definition, but the simplest way to think of it is to ask whether the "being" of either the person or the robot/computer program is harmed or changed in any way.

[What would your 'bot do? See My Ideal Robot: 10 Must-Have Features.]

If you look at the bank fraud example, the robot isn't harmed or changed in committing the fraud. It is merely doing what it is programmed to do. But let's look at the example of the robot masquerading as a human.

The authors didn't cite this example, but you might recall this summer when a robot telemarketer called Time Magazine editor Michael Scherer and tried to give him a "deal on health insurance." Partway through the conversation, Scherer got suspicious and asked the robot directly whether it was a robot. It laughed and denied it. When he asked again, it blamed the bad connection. The robot essentially failed the Turing Test, but what if it hadn't?

Certainly, if Scherer had been interested in health insurance, he would have been making decisions based in part on the idea that a human was on the other end of the line. Does that matter? Some people would certainly say that what you don't know won't hurt you. But when you make a business decision, don't you make it, in part, based on the trust you have with the company and the agent speaking to you? Perhaps you might make a different business decision talking to a computer than you would talking to a person. If you don't know you are talking to a sophisticated program when you make a decision, are you damaged?

The authors give an even more dangerous example. What if a robot looked and talked so much like a human that you couldn't tell the difference between the robot and a person? What if there were a fire in a building, and the fire department saved the robot before saving the human?

Even when robots don't pretend to be human, people sometimes make critical changes in their behavior or character because of a robot. The example given in the paper is robot pets given to elders in nursing care. Despite being told repeatedly that the pets were robots, many of the patients refused to give them up, saying that the pets "listen to them." In other words, humanizing robots can cause deeply confusing mental reactions. Don't believe me? Think about it the next time you call Siri "she."

The authors concluded that our "beings" -- our collective moral and intellectual decisions -- are altered enough by robots masquerading as people that the attempt is unethical and potentially dangerous, and that it increases the chances for evil.

Interestingly enough, there was one example where they did not necessarily find a moral quandary: someone behaving immorally with a robot. They gave two different types of examples. One was the previous example of the bank fraud robot. In that example, the robot is simply designed to do what it does. No damage is done to the robot, because the deception is not to the robot, but to the bank. The developer is being unethical to the bank, not the robot.

In another example, they discuss the user's relationship with a robot. They think the major issue is whether the person knows the robot is a robot. When people know they are dealing with a robot, the authors say, "Current moral thinking of many people is deeply influenced by ethical theories that [assume] that humans are the only moral patients. By observing the non-humanness of the robot, we might apply an ethical analysis that views the robot as a 'thing' with no more ethical significance than our car or vacuum cleaner."

But when the robot deceives the person into thinking it is a person, the authors concentrate on how the deception changes the behavior

Next Page

David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, ... View Full Bio
Comments
Lorna Garey, User Rank: Author
10/29/2014 | 5:32:11 PM
Re: elderly and robot pets
Thank you! I thought I was the only one with that reaction. What would have been the harm in leaving the pet-bots? And what was the administration of the facility thinking when it didn't ensure the seniors wouldn't lose these connections?
kstaron, User Rank: Ninja
10/29/2014 | 1:47:10 PM
elderly and robot pets
Is it just me, or does anyone else think taking the little robotic dogs away from the elderly is the unethical behavior here? There is scientific evidence that having an animal companion in places like hospitals and nursing homes benefits the patients. If the elderly involved connected with the pet, even knowing it was a machine, isn't that connection part of what benefits the humans? At that point, taking the robots away is just a little cruel.

The simple question for robots is this: assuming they pass the Turing Test (is there one to assess whether robots can pass as domestic pets?), do we treat them like robots, or do we treat them like people? Until we are ready to answer that question ethically, we aren't ready for human-like robots.
PedroGonzales, User Rank: Ninja
10/24/2014 | 10:56:57 AM
Re: The problem isn't here - yet
As we know, robots are really good at repetitive and specific tasks, such as working an assembly line or supporting people handling complex equipment. I don't think robots should be like humans; many sci-fi writers have explored the negative possibilities of human-like robots, in works such as Blade Runner and I, Robot.
Sara Peters, User Rank: Ninja
10/23/2014 | 4:35:34 PM
Re: The problem isn't here - yet
Do we really want robots to act like humans? Do we need them to? Can't they be good at their own thing, and let us be good at our thing, and have us simply work together? I'm sure that artificial intelligence is a valuable thing, I just don't know if we think enough about what kind of intelligence we need -- what mix of book smarts and people smarts is best?
AndrewR592, User Rank: Apprentice
10/20/2014 | 5:38:54 PM
Robots as People
This article is operating under the assumption that robots will always "be" robots; meaning that there will always be that something missing that makes them non-human. For the most part, we defer to their inability to self-replicate, self-repair, and lack of emotions as limiters to being "human" in nature. However, what happens when synthetic lifeforms are capable of these things? If you think about it, we are nothing more than biological computers in self-repairing mobile carriers. What then becomes of the android that can do the same? What happens when we create organic computing, materials that can repair themselves, and computers with learning capabilities? When the created becomes like the creator where is the distinction between the two?

Here is the interesting reality: we are already moving toward those goals. Look at IBM's Watson, which beat everyone at Jeopardy. I suppose I would too if I had the entire Internet as my encyclopedia and could access it in mere nanoseconds. Scientists have created materials that can repair themselves. We are growing organs in labs with 3D printers. We have organic LEDs and will soon have organic transistors. As we further decode DNA, we will begin to understand its programming language and use it to manipulate proteins to do what we want. There is also a scientist who has created synthetic DNA, called XNA. We are modifying viruses to deliver payloads we want without the ugly pathology usually associated with those same viruses. The line between the real and the synthesized is blurring, and it is only a matter of time before we have androids like those in the "Alien" series, or Data from Star Trek. And the most interesting part of the Star Trek series was Data's eternal mission: to be more like humans. Up until he got his emotion chip, he was clearly a robot. But after getting his emotion chip, there wasn't much to separate him from real humans. As such, in that series' writing he was regarded as a person. In fact, it was Data's emotion chip that caused him the most dilemmas.

Maybe robots won't want to be like us.
SachinEE, User Rank: Ninja
10/20/2014 | 2:42:20 PM
Re: Robots like People
That would create all sorts of problems, because an ever-learning AI can mimic human (and natural) environments to behave like us. It can be moved, get afraid, get angry, and feel all sorts of other emotions that otherwise should not be put into a robot.
GAProgrammer, User Rank: Ninja
10/20/2014 | 2:39:08 PM
Re: Interesting points
I agree completely!
SachinEE, User Rank: Ninja
10/20/2014 | 2:37:26 PM
Re: Interesting points
The gap between the living and the non-living will always exist, and no matter how great the robot AI is, you would still feel that distance. Also, a robot will work best without human intervention midway through its task.
GAProgrammer, User Rank: Ninja
10/20/2014 | 2:18:22 PM
Interesting points
A number of sci-fi movies and books have "put it out there" that even robots with extraordinary AI would suffer many abuses at the hands of humankind. I agree that the separation between man and machine leaves potential abuses to be perceived no differently than ripping a piece of paper: no harm done to a human, so what do I care? The robot has enough intelligence to perform the job, but not so much that it is "alive." See Short Circuit for a great example of that line between "life" and "intelligence."

As for the notes you made on Star Wars, maybe you have a bigger heart or more empathy, but I didn't feel "badly" about them clamping R2D2 or his serving Jabba (at least not in a "slavery" kind of way), all for the same reasons as above: they were created to serve biological beings, just like a cell phone, a computer program, or a vacuum cleaner. C3P0 was created as an assistant and translator; take away the anthropomorphic form and you might have one of those in your pocket: live translation of audio and a personal organizer. However, I agree that the AI level in SW was such that the droids did have their own personalities, which is where I think the dilemma lies...

It seems a lot of AI now isn't so much about personality as about working toward enough intelligence to make decisions without human intervention.

Also, I do admit to some kind of connection... I did cringe at the robot being branded. However, I think that was more about the sound of pain than the fact that some "being" was being hurt.

Regardless, great article and great points. This one really made me think!
TerryB, User Rank: Ninja
10/20/2014 | 1:41:18 PM
Thoughts from Spock
I can hear him now: "Why would you create a machine based on something as flawed as a human?"

I must admit I'd much rather have the Vulcan version. Especially if he knows the nerve pinch to put someone to sleep.