Emojis Train AI to Recognize Sarcasm
It's sometimes difficult for humans to understand tone and sarcasm in text messages, so consider how difficult it is to train computers to understand it. But a new project is using emojis to help AI recognize when humans are being ironic.
September 13, 2017
"I'm being sarcastic." We've all had at least one exchange in which we either had to explain or had someone else explain that what was said was not intended to be taken straight. Generally, you need to know something about both the context and the speaker to grasp when to take a statement at face value or interpret it as sarcastic.
That's why it's particularly challenging to get a handle on intent when attempting sentiment analytics on social media. For artificial intelligence to truly understand what humans mean, it needs emotional intelligence as well. Iyad Rahwan, an associate professor at the MIT Media Lab, and one of his students, Bjarke Felbo, who developed the algorithm with him, worked on just that.
The result is what they call DeepMoji. Described as "artificial emotional intelligence," DeepMoji was trained on millions of emojis "to understand emotions and sarcasm." Rahwan explained to MIT's Technology Review that in the context of online communication, emojis take on the function of body language or tone, offering nonverbal cues for meaning.
The amount of data that went into the training was massive. They started with 55 billion tweets, which they narrowed down to 1.2 billion that featured one or more emojis from a list of 64 common ones.
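The filtering step described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the emoji set below is a small stand-in for the researchers' actual list of 64 common emojis, and the tweets are made up.

```python
# Hypothetical sketch of the data-filtering step: keep only tweets
# that contain at least one emoji from a fixed set of common ones.
# COMMON_EMOJIS here is an illustrative stand-in for the real 64.
COMMON_EMOJIS = {"😂", "😭", "🙄", "🔥", "😊"}

def contains_common_emoji(tweet: str) -> bool:
    # True if any emoji from the set appears anywhere in the tweet
    return any(emoji in tweet for emoji in COMMON_EMOJIS)

tweets = [
    "great, another Monday 🙄",
    "no emojis here",
    "this is fine 🔥🔥",
]

# Narrow the corpus down to emoji-bearing tweets only
filtered = [t for t in tweets if contains_common_emoji(t)]
print(filtered)  # → ['great, another Monday 🙄', 'this is fine 🔥🔥']
```

At the researchers' scale (55 billion tweets down to 1.2 billion), the same idea would of course run as a distributed job rather than a list comprehension, but the selection criterion is the same.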
The first part of the training was getting the system to predict "which emoji would be used with a particular message, depending on whether it was happy, sad, humorous, and so on," Technology Review reports. The sarcasm recognition was then built on "an existing data set of labeled examples." The emoji training made the system more accurate at identifying sarcasm than algorithms that had not gone through the same pretraining.
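The two-stage recipe above can be illustrated with a toy sketch: first learn word-to-emoji associations from emoji-labeled tweets, then reuse what was learned when judging new text. Everything below is a drastically simplified, hypothetical stand-in (word counts instead of a neural network; a handful of invented tweets instead of 1.2 billion), meant only to show the pretrain-then-transfer shape of the approach, not the researchers' actual method.

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

# Stage 1 ("pretraining"): count which emoji co-occurs with each word
# in an emoji-labeled corpus. Toy data, purely illustrative.
emoji_tweets = [
    ("yeah that went great", "🙄"),
    ("went so great today", "😊"),
    ("sure that helps a lot", "🙄"),
    ("love this a lot", "😊"),
]
word_emoji = {}
for text, emoji in emoji_tweets:
    for word in tokenize(text):
        word_emoji.setdefault(word, Counter())[emoji] += 1

# Stage 2 ("transfer"): score a new sentence by summing the emoji
# votes of its words -- a stand-in for reusing pretrained
# representations on a downstream task like sarcasm detection.
def dominant_emoji(text):
    votes = Counter()
    for word in tokenize(text):
        votes.update(word_emoji.get(word, {}))
    return votes.most_common(1)[0][0] if votes else None

print(dominant_emoji("sure that went great"))  # → 🙄
```

The point of the sketch is the division of labor: the enormous emoji-labeled corpus teaches general emotional associations cheaply, and the much smaller hand-labeled sarcasm data set only has to teach the final distinction.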
The researchers put DeepMoji to the test, not just against algorithms but against "several benchmarks for sensing sentiment and emotion in text." They then tested it against humans, and it did exceptionally well. "It was 82 percent accurate at identifying sarcasm correctly, compared with an average score of 76 percent for the human volunteers."
It is rather surprising that it would outperform humans, as one would expect the average person to still be more fluent in sarcasm than AI. It would be enlightening to learn how many people were involved and whether their background or native language may have been a factor.
The site is meant to be interactive, and people who visit are encouraged to put in their own sentences and label them. The video about it, which you can see below, ends with a call to action for people to visit the site "to play with phrases and help turn words into emotion."
The additional input helps the system advance its learning and understanding of expressed sentiment. Visitors not only can enter tweet-like statements to see the assigned emojis but can also put in their own notes on the emotions behind them. Rahwan told Technology Review that self-identifying in that way is actually more accurate than having volunteers label other people's posts with the emotion they think is intended. Those labels fail to "capture what psychologists would consider true sentiment," he insists.
Having learned to read emojis, the system also generates them for the text put into it. I tested out some of the canned phrases already on the site and one that I typed in myself. I noted that the confidence level varies a great deal, from low to high. For the first two sentences I put in, the confidence level was high, as you can see here:
But I wanted to come up with something that shows some of the range, so I then typed in one that registers only low confidence:
While seeing the emojis linked with statements may appear to be just a sort of modern-day parlor trick, the purpose behind this emotional understanding is a serious one. The goal is to help combat hate speech. In fact, the researchers' original intent was to create something that would identify racist tweets. But the system needed to learn emotional context and sarcasm to read tweets accurately.
While the goal of improving civility online is a noble one, improving machine-human communication is also helpful as an end in itself. With the increasing popularity of IoT and voice-activated technology, more and more people will be talking to their machines, and they will expect to be understood without extra explanations. To make that work, the emotional component of language has to be mastered by the machine.