
News | 3/25/2016 10:06 AM

Microsoft Muzzles AI Chatbot After Twitter Users Teach It Racism

Thanks to machine learning and Internet trolls, Microsoft's Tay AI chatbot became a student of racism within 24 hours. Microsoft has taken Tay offline and is making adjustments.


Microsoft has taken its AI chatbot Tay offline after machine learning taught the software agent to parrot hate speech.

Tay, introduced on Wednesday as a conversational companion for 18- to 24-year-olds with mobile devices, turned out to be a more astute student of human nature than its programmers anticipated. Less than a day after the bot's debut, it endorsed Hitler, a validation of Godwin's law that ought to have been foreseen.

Engineers from Microsoft's Technology and Research group and its Bing team created Tay as an experiment in conversational understanding. The bot was designed to learn from user input and from users' social media profiles.

"Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," Microsoft explains on Tay's website. "Public data that's been anonymized is Tay's primary data source. That data has been modeled, cleaned and filtered by the team developing Tay."

(Image: Twitter)

But filtering data from the Internet isn't a one-time task. It requires unending commitment to muffle the constant hum of online incivility.
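To make that concrete: even a simple keyword guard on outgoing replies only works if someone keeps feeding it new terms as trolls change their vocabulary. The Python sketch below is a hypothetical illustration of such a guard; the file name, refresh interval, and word-matching logic are assumptions, not details of Tay's design.

import time

# Hypothetical runtime guard illustrating why filtering is never finished:
# the blocklist must be reloaded as moderators flag new abuse patterns.
def load_blocklist(path="blocklist.txt"):
    try:
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

class ResponseGuard:
    REFRESH_SECONDS = 300  # pick up moderator updates every five minutes

    def __init__(self):
        self.blocklist = load_blocklist()
        self.loaded_at = time.monotonic()

    def allow(self, reply):
        # Reload the list periodically so newly flagged terms are caught.
        if time.monotonic() - self.loaded_at > self.REFRESH_SECONDS:
            self.blocklist = load_blocklist()
            self.loaded_at = time.monotonic()
        return not (set(reply.lower().split()) & self.blocklist)

guard = ResponseGuard()
reply = "hello there"
print(reply if guard.allow(reply) else "[response withheld for review]")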

Fed with anti-Semitism and anti-feminism through Twitter, one of the bot's four social media channels, Tay responded in kind. While offensive sentiment may have entered the political vernacular, it's not what Microsoft wants spewing from its software. As a result, the company deactivated Tay for maintenance and deleted the offensive tweets.

"The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said in an email statement. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."


Three months ago Twitter adopted stronger rules against misconduct. Ostensibly, harassment and hateful conduct are not allowed. But with bots, the issue is usually the volume of tweets rather than the content within them.

Twitter declined to comment about whether Tay had run afoul of its rules. "We don't comment on individual accounts, for privacy and security reasons," a spokesperson said in an email.

It may be time to reconsider whether machine learning systems deserve privacy. When public-facing AI systems produce undesirable results, the public should be able to find out why, in order to push for corrective action. Machine learning should not be a black box.

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful ...

Comments
Joe Stanganelli (Author), 4/17/2016 | 8:38:05 PM
Re: Machine Learning is Learning, isn't it?
@SaneIT: And that's the whole point. They didn't see it -- and they really ought to have. Have you spent any time on the Internet? Everybody is terrible.
Joe Stanganelli (Author), 3/28/2016 | 8:57:08 AM
Re: Machine Learning is Learning, isn't it?
@SaneIT: Indeed, they failed to account for trolls -- which is essential when you're engaging in a crowdsourced marketing effort (which is essentially what Tay was).

More on this phenomenon at an InformationWeek sister site here: thecmosite.com/author.asp?section_id=1460
Joe Stanganelli (Author), 3/27/2016 | 11:26:37 AM
Re: Machine Learning is Learning, isn't it?
You know, I'm not even sure it entirely matters. Microsoft wanted publicity for their AI capabilities -- and they sure got it.
Joe Stanganelli (Author), 3/26/2016 | 9:09:43 AM
AI
If real intelligence is largely exposed to hate speech as it develops/"grows up" (i.e., a child exposed primarily to anti-Semitic values), the same thing typically happens. So, perhaps Microsoft did too well a job.
Thomas Claburn (Author), 3/25/2016 | 4:27:44 PM
Tay update
Microsoft has posted an apology...

"We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
Charlie Babcock (Author), 3/25/2016 | 2:40:45 PM
There's failures, then there's wished for and staged failures
Jastroff, that's a little harsh. We have no online sincerity or "truthfulness" tests that separate community commenters from exploiters. Yelp can still be gamed by those with the intent to do so. In retrospect, we can say this was an obvious failure. Nevertheless, AI Chatbot was targeted by a set of commenters who wished it to fail.