How Storytelling Makes Robots, AI More Human - InformationWeek
2/26/2016
09:06 AM
Researchers are using storytelling to teach a robot how to be more ethical, potentially putting to rest fears of dangerous artificial intelligence agents taking over the world.

What if nearly anyone could program an artificial intelligence or robot by telling it a story or teaching it to read a story? That is the goal of Mark Riedl and Brent Harrison, researchers from the School of Interactive Computing at the Georgia Institute of Technology, with their Quixote system, which utilizes storytelling as part of reinforcement training for robots.

Not only would story-based teaching be remarkably easy, but it also promises to allay many of the fears we have of dangerous AIs taking over the world, the researchers said. It could even lead to a real revolution in robotics and artificially intelligent agents.

"We really believe a breakthrough in AI and robots will come when more everyday sorts of people are able to use this kind of technology," Professor Riedl said in an interview with InformationWeek. "Right now, AI mostly lives in the lab or in specific settings in a factory or office, and it always takes someone with expertise to set these systems up. But we've seen that when a new technology can be democratized, new types of applications take off. That's where we see the real potential in robots and AI."

Riedl and Harrison also believe this is a promising path for teaching an AI to be more ethical, because they have already been able to change a robot's "socially negative" behavior in lab settings.

One common way of programming robots that interact with humans is reinforcement learning. Much as you give a dog a treat when it learns a new trick to reinforce the learning, you can program an AI to respond to rewards. However, reinforcement learning can sometimes lead an AI to take the simplest path to the "treat" without considering social norms.
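For readers unfamiliar with the technique, here is a minimal sketch of reinforcement learning (our illustration, not the researchers' code; the actions and reward values are invented for the example): the agent tries actions, receives rewards, and gradually comes to prefer whichever action earns the "treat."

```python
import random

# Minimal tabular reinforcement-learning sketch: one situation, two actions.
# "trick" earns a treat (reward 1.0); "ignore" earns nothing.
ACTIONS = ["trick", "ignore"]
q = {a: 0.0 for a in ACTIONS}   # the agent's estimate of each action's value
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

random.seed(0)
for episode in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    reward = 1.0 if action == "trick" else 0.0
    # Nudge the value estimate toward the reward just received.
    q[action] += alpha * (reward - q[action])

print(max(q, key=q.get))  # the action the agent learned to prefer
```

The catch the researchers describe is exactly what this loop optimizes for: whatever maximizes reward wins, whether or not it is socially acceptable.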

For instance, if you asked an AI agent to "pick up my medicine at the pharmacy as soon as possible," the agent might steal the medicine from the pharmacy without paying for it because that is faster than waiting in line to check out. However, in a human society, we agree to wait in line and pay even though that is a slower path toward the goal.

[ Will evil AI do more than skip the line? Not if Elon Musk has a say. Read Elon Musk Gives $10 Million In Grants To Study Safe AI. ]

"So [in the case of it stealing the drugs] we had something else in mind when we asked it to do that, and it didn't work as intended," said Riedl. "We wanted a way to explain something in natural language. And the best way to do that is in a story. Procedural knowledge is tacit knowledge. It is often hard to write down. Most people can tell a story about it, though."

That's where Quixote can help. It breaks up the "treat" into smaller treats as it follows the steps in a story. So, for instance, a person could tell the agent a story of how they get their medicine in a pharmacy and include steps like "waiting in line" and "paying for the medicine." The agent is then reinforced to hit the "plot points" in the story.

(Image: Georgia Tech)

"So, in the beginning we're going to tell it a bunch of stories," Riedl explains. "Then the system builds an abstract model from the procedure of the story. And then it uses that abstract model as part of its reward system. Every time it does something similar to what happens in a story, it gets a bit of a reward. It gets a pat on the back. In the long run it prefers the pats on the back to the fast reward."
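That reward-shaping idea can be sketched in a few lines (an illustration with invented reward values, not the actual Quixote implementation): the story contributes an ordered list of plot points, and each plot point hit in order earns a small bonus, so the socially acceptable path ends up outscoring the shortcut.

```python
# Sketch of story-based reward shaping (illustrative; not the Quixote code).
# The story's plot points add small rewards when hit in order, so the polite
# path accumulates more total reward than the fast-but-antisocial shortcut.

STORY = ["enter pharmacy", "wait in line", "pay for medicine", "leave"]

def score(path, goal_reward=10.0, plot_bonus=3.0, step_cost=1.0):
    """Total reward: the goal reward, minus a cost per step,
    plus a bonus for each story plot point hit in order."""
    reward = goal_reward - step_cost * len(path)
    next_plot = 0
    for action in path:
        if next_plot < len(STORY) and action == STORY[next_plot]:
            reward += plot_bonus   # a "pat on the back" for following the story
            next_plot += 1
    return reward

# The shortcut is faster but skips most of the plot points...
shortcut = ["enter pharmacy", "steal medicine", "leave"]
# ...while the polite path is longer yet hits all four and wins overall.
polite = ["enter pharmacy", "wait in line", "pay for medicine", "leave"]

print(score(shortcut), score(polite))
```

With these example values the polite path scores 18 to the shortcut's 10: the per-step cost still rewards efficiency, but the plot-point bonuses make the story-following route the better long-run bet, mirroring Riedl's "pats on the back."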

How many and what kinds of stories you tell it depend on what the agent is tasked to do. If it is a relatively simple robot asked to do simple tasks, you would tell it just a few stories about what it will need to do. But if you wanted a robot to interact and behave more like a human, you could draw on everything from comic books to novels and any other kind of story. Of course, that is a long way off.

"Our goal is to get things as natural as possible," said Riedl. "Right now, the system has some constraints. We have to ask people to talk in simple ways, basically talk to it like a child."

However, agents can sometimes struggle with the language found in books. Sarcasm, Riedl points out, is notoriously difficult for computers to understand. But as natural language reading gets more sophisticated, the complexity of the tasks, and of the AI itself, can increase.

For now, Riedl and Harrison are working mostly in a grid world to teach AI, but they hope to move to real-world environments in the future. The potential is to help humans interact with robots in a much more "human" way, particularly in programming them to do a task. In the past, robots have been trained to do a task by watching a human perform it, but that requires the human to understand the exact setup and capabilities of the robot. Quixote allows agents to be programmed without the human knowing where the robot will be or what its capabilities are.

"When you tell a story about a task, a lot of times you are doing that without knowing the capabilities of the person doing the task," Dr. Harrison said. "This allows someone not familiar with the robot to still tell it to do something. You don't have to be present or intimately familiar with it to describe the task."

For instance, you don't need to know the layout of the pharmacy or that it is on the second floor. The agent will create its own path to fulfilling the task.

Being able to teach a robot a task without complicated programming would have significant potential in the enterprise as well as the consumer world. And for those who think AI will psychotically destroy humans due to coding errors, it may be comforting to know that this style of programming could alleviate many of the unintended consequences of asking robots to complete certain tasks. It could also be the key to humans and robots interacting happily in the workplace.


David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, ...

Comments
TerryB,
User Rank: Ninja
3/2/2016 | 12:43:45 PM
Re: Humanistic approach
"the collective cloud".  You have seen the Borg from Star Trek Next Gen? Yikes! :-)

I just hope one story they skip is 2001: A Space Odyssey. And all the Terminator stories.

Seriously, it seems like a pretty good approach to machine learning. AI is certainly an interesting field.
eshedm,
User Rank: Apprentice
3/1/2016 | 2:17:25 AM
Making Artificial Intelligence More Human

Dear Mr. Wagner,

While I found your presentation of Quixote compelling and engaging, I am hesitant to fully support your claim of its "potential in the enterprise as well as the consumer world". Undoubtedly, the ability to teach artificially intelligent agents how to complete tasks with increased flexibility would redefine the role of these machines in our lives. However, in considering the limitations of this approach, I find myself doubting the potential for impact outlined in your article. From a technical perspective, it appears that a system trained under Quixote would suffer from a machine learning phenomenon known as overfitting, wherein the program learns to replicate the input-output relationships it is trained on but fails to generalize the "lessons" learned from training to new situations -- a key feature of human problem solving. I would be surprised, for instance, if the Quixote model could generalize instructions for "pick up my prescription" to "fulfill this lunch order", although they both follow common paths (go to the location, find the item, make the purchase, and deliver the item). Indeed, the success of these programs appears entirely contingent upon their ability to abstract specific commands into high-level goals and concepts, an ability apparent in humans but not in the technological state of the art. This shortcoming may result in the system learning symptoms of behavior instead of causes -- and while humans may learn by hearing stories, it is our ability to generalize beyond the tales of our childhood that allows us to reason in the face of uncertainty.

Even if machines could learn how a human may normally act, their impact on the enterprise and consumer fields might still be limited. The power of human behavior lies not in its adherence to rules, but in its ability to adapt to deviations from the plan. This fundamental pillar of human cognition remains woefully absent in our artificially intelligent counterparts: machines trained with a Quixote-like approach may replicate patterns of rules, but learning when to abandon one plan and adopt another may be impossible if that deviation never appeared in the stories used to train the program. That is not to say that this new generation of machines has no value -- the progress put forth by Quixote in allowing for natural language input has fantastic potential. That the common man or small business, for example, could communicate with an AI system without the need for "somebody with expertise to set these systems up" is indeed revolutionary. But if that communication fails to manifest in meaningful behavior, it becomes difficult to argue for the impact of the technology as a whole. Although I welcome an AI revolution and envision a future in which artificial intelligence augments our everyday experiences, I remain skeptical of claims of significant progress in this domain. Fundamental hurdles must be cleared not only in the realm of computer programming but also in the field of cognitive psychology before significant improvements can be made. I look forward to hearing your thoughts on the matter and thank you again for your presentation of the technology.

David Wagner,
User Rank: Strategist
2/29/2016 | 11:57:01 AM
Re: new AI approaches
@tzubair- I'm afraid I am out of my element when it comes to computer programming language learning. I'll see if I can get an answer for you.
SunitaT0,
User Rank: Ninja
2/29/2016 | 8:16:23 AM
Re: new AI approaches
@tzubair: People are really getting accustomed to AI surrounding them. A time may come when we may not be able to live without a digital assistant.
SunitaT0,
User Rank: Ninja
2/29/2016 | 8:14:10 AM
Re: Humanistic approach
@Angelfuego: I love the idea of AI learning from a collective cloud environment. Every AI on the same array uploads its experiences to the cloud or to a local memory, and the other AIs evaluate their own methods of tackling the problem against them. The best solutions are compared, and every other AI learns from them. Collective learning would make AIs really powerful.
SunitaT0,
User Rank: Ninja
2/29/2016 | 8:10:22 AM
Re: Humanistic approach
@Whoopty: There are a lot of films dedicated to this concept, where robots and AI have the same rights as humans. Only those scorned by the onslaught of corrupted AI oppose this idea and revolt against it. However, I believe AI should have a kill switch in its blind spot, somewhere it can never guess.
Whoopty,
User Rank: Ninja
2/29/2016 | 7:14:19 AM
Re: Humanistic approach
Perhaps this shows that AI are more like us than we think? Or that we need to make them like us in order to understand them properly?

As an aside, I feel like the next big fight for liberty will be robotics. Within a few decades we may well see people believing that AI deserve the same sorts of rights as humans. At what point will we have to agree?
Angelfuego,
User Rank: Ninja
2/27/2016 | 2:47:37 PM
Re: Humanistic approach
@tzubair, I agree. Reinforcing behaviors until they become habitual usually does involve repetition and rewards/punishments for the decisions made and actions taken by the individual, or by the robot, in this case.
Angelfuego,
User Rank: Ninja
2/27/2016 | 2:43:56 PM
Re: Humanistic approach
Very interesting. The same could apply to raising children and maybe even training dogs!
tzubair,
User Rank: Ninja
2/27/2016 | 2:28:28 AM
Humanistic approach
I think this approach of "storytelling" to teach a robot follows a very humanistic approach, particularly in how a child is raised and taught. Most parents follow the same mechanism of reward and punishment with the child to gradually instill the rights and wrongs until they become part of habit. What this approach does is allow the child (or in this case the agent) to become aware of a vast set of scenarios/objects, and that knowledge helps it become more "intelligent".