How Storytelling Makes Robots, AI More Human - InformationWeek

InformationWeek is part of the Informa Tech Division of Informa PLC

This site is operated by a business or businesses owned by Informa PLC and all copyright resides with them.Informa PLC's registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. Number 8860726.

IoT
IoT
IT Life
News
2/26/2016
09:06 AM
Connect Directly
Twitter
RSS
E-Mail
50%
50%

How Storytelling Makes Robots, AI More Human

Researchers are using storytelling to teach a robot how to be more ethical and potentially put to rest fears of dangerous artificial intelligence agents taking over the world.

Google, Tesla And Apple Race For Electric, Autonomous Vehicle Talent
Google, Tesla And Apple Race For Electric, Autonomous Vehicle Talent
(Click image for larger view and slideshow.)

What if nearly anyone could program an artificial intelligence or robot by telling it a story or teaching it to read a story? That is the goal of Mark Riedl and Brent Harrison, researchers from the School of Interactive Computing at the Georgia Institute of Technology, with their Quixote system, which utilizes storytelling as part of reinforcement training for robots.

Not only would story-based teaching be incredibly easy, it promises to solve many of the fears we have of dangerous AIs taking over the world, the researchers said. It could even lead to a real revolution in robotics and artificially intelligent agents.

"We really believe a breakthrough in AI and robots will come when more everyday sorts of people are able to use this kind of technology," Professor Riedl said in an interview with InformationWeek, "Right now, AI mostly lives in the lab or in specific settings in a factory or office, and it always takes someone with expertise to set these systems up. But we've seen that when a new technology can be democratized new types of applications take off. That's where we see the real potential in robots and AI."

Riedl and Harrison also say they believe that if you want to teach an AI to be more ethical, this is a great path, because they've actually been able to change "socially negative" behavior of a robot in lab settings.

One common way of programming robots that interact with humans is called reinforcement learning. Much like you give a dog a treat when it learns a new trick to reinforce the learning, you can program an AI to do the same thing. However, reinforcement training can sometimes lead an AI into taking the simplest path to the "treat" without considering social norms.

For instance, if you asked an AI agent to "pick up my medicine at the pharmacy as soon as possible," the agent might steal the medicine from the pharmacy without paying for it because that is faster than waiting in line to check out. However, in a human society, we agree to wait in line and pay even though that is a slower path toward the goal.

[ Will evil AI do more than skip the line? Not if Elon Musk has a say. Read Elon Musk Gives $10 Million In Grants To Study Safe AI. ]

"So [in the case of it stealing the drugs] we had something else in mind when we asked it to do that, and it didn't work as intended," said Riedl, "We wanted a way to explain something in natural language. And the best way to do that is in a story. Procedural knowledge is tacit knowledge. It is often hard to write down. Most people can tell a story about it though."

That's where Quixote can help. It breaks up the "treat" into smaller treats as it follows the steps in a story. So, for instance, a person could tell the agent a story of how they get their medicine in a pharmacy and include steps like "waiting in line" and "paying for the medicine." The agent is then reinforced to hit the "plot points" in the story.

(Image: Georgia Tech)

(Image: Georgia Tech)

"So, in the beginning we're going to tell it a bunch of stories," Riedl explains, "Then the system builds an abstract model from the procedure of the story. And then it uses that abstract model as part of its reward system. Every time it does something similar to what happens in a story, it gets a bit of a reward. It gets a pat on the back. In the long run it prefers the pats on the back to the fast reward."

How many and what kinds of stories you tell it depend on what the agent is tasked to do. If it is a relatively simple robot that is asked to do simple tasks, you would tell it just a few stories about what it will need to do. But if you wanted a robot to interact and behave more like a human, everything is available -- from comic books to novels and any other kind of story. Of course, that is a long way off.

"Our goal is to get things as natural as possible," said Riedl. "Right now, the system has some constraints. We have to ask people to talk in simple ways, basically talk to it like a child."

However, agents can sometimes struggle with language found in books. Sarcasm, Riedl points out, is notoriously difficult for computers to understand. But as natural language reading gets sophisticated, the complexity of tasks and of the AI itself can increase.

For now, Riedl and Harrison are working mostly in a grid world to teach AI, but will hopefully move to real-world environments in the future. The potential is to help humans interact with robots in a much more "human" way, particularly in programming them to do a task. In the past, robots have been trained to do a task by watching a human perform that task, but that requires the human to understand the exact setup and capabilities of the robot. Quixote allows agents to be programmed without the human knowing where the robot will be or what its capabilities are.

"When you tell a story about a task, a lot of times you are doing that without knowing the capabilities of the person doing the task," Dr. Harrison said. "This allows someone not familiar with the robot to still tell it do something. You don't have to be present or intimately familiar to describe the task."

For instance, you don't need to know the layout of the pharmacy or that it is on the second floor. The agent will create its own path to fulfilling the task.

Being able to teach a robot a task without complicated programming would have significant potential in the enterprise as well as the consumer world. And for those who think AI will psychotically destroy humans due to coding errors, it may be comforting to know that this style of programming could alleviate many of the unintended consequences of asking robots to complete certain tasks. It could also be the key to humans and robots interacting happily in the workplace.

Does your company offer the most rewarding place to work in IT? Do you know of an organization that stands out from the pack when it comes to how IT workers are treated? Make your voice heard. Submit your entry now for InformationWeek's People's Choice Award. Full details and a submission form can be found here.

David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, ... View Full Bio

We welcome your comments on this topic on our social media channels, or [contact us directly] with questions about the site.
Comment  | 
Print  | 
More Insights
Comments
Newest First  |  Oldest First  |  Threaded View
David Wagner
50%
50%
David Wagner,
User Rank: Strategist
2/29/2016 | 11:57:01 AM
Re: new AI approaches
@tzubair- I'm afraid I am out of my element when it comes to computer programming language learning. I'll see if I can get an answer for you.
David Wagner
50%
50%
David Wagner,
User Rank: Strategist
2/26/2016 | 6:19:53 PM
Re: new AI approaches
@Gary_El- You are right that we aren't there on reading natural language yet. But the reason Watson went on Jeopardy is to show how far we've come in that regard. I think we're making progress, and I think tools like this might actually help the natural language side of things because we've got a more practical use cases for trying to "get" language. 
David Wagner
50%
50%
David Wagner,
User Rank: Strategist
2/26/2016 | 12:10:30 PM
Re: new AI approaches
@pedrogonzalez- Right, well good PR is important. Every accident with a robot or self-driving car gets a billion times more press than each good thing. So there is always that fear. But here is the interesting thing to me about it. Actually talking to a robot and having it understand you and do what you ask without needing to program it will make for a very different, more humna-like experience. Even if the robot would ordinarily work with a remote control or a set of simple visual programming tools, it still feels like a robot. Talking to the robot might break down the "other" feeling and allow those with fears "get to know" the robot or agent in a way that is more comfortable. 
InformationWeek Is Getting an Upgrade!

Find out more about our plans to improve the look, functionality, and performance of the InformationWeek site in the coming months.

News
Remote Work Tops SF, NYC for Most High-Paying Job Openings
Jessica Davis, Senior Editor, Enterprise Apps,  7/20/2021
Slideshows
Blockchain Gets Real Across Industries
Lisa Morgan, Freelance Writer,  7/22/2021
Commentary
Seeking a Competitive Edge vs. Chasing Savings in the Cloud
Joao-Pierre S. Ruth, Senior Writer,  7/19/2021
White Papers
Register for InformationWeek Newsletters
2021 State of ITOps and SecOps Report
2021 State of ITOps and SecOps Report
This new report from InformationWeek explores what we've learned over the past year, critical trends around ITOps and SecOps, and where leaders are focusing their time and efforts to support a growing digital economy. Download it today!
Video
Current Issue
Monitoring Critical Cloud Workloads Report
In this report, our experts will discuss how to advance your ability to monitor critical workloads as they move about the various cloud platforms in your company.
Slideshows
Flash Poll