The Defense Advanced Research Projects Agency, better known as DARPA, recently announced the successful demonstration of an advanced AI-enabled model capable of detecting sarcasm in textual communications.
With platforms like Twitter and Facebook increasingly becoming battlespaces for “grey zone” warfare by bad actors, the ability to identify positive, negative, or neutral emotions in online communications has become a significant focus for the defense community.
Yet, one of the significant problems the commercial and defense communities face in accurately detecting social sentiment stems from how humans communicate with each other, which can appear nonsensical to a dispassionate and insentient computer program.
For example, sarcasm, language meant to convey the opposite of what is literally said, is an everyday form of communication that has proved particularly difficult for sentiment analysis tools to understand.
“Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions, and gestures that cannot be represented in text,” said Dr. Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O) in a press release. “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”
Attempting to tackle a computer’s ability to understand sarcasm, researchers from the University of Central Florida working on DARPA’s Computational Simulation of Online Social Behavior (SocialSim) program developed an AI-enabled program specifically tailored to detect sarcasm.
Taking input data such as tweets or online messages, the team of UCF researchers used an interpretable deep learning model to look for crucial sarcasm cues, including ironic connotations or negative emotions. Using recurrent neural networks and attention mechanisms, the researchers developed a model that tracks dependencies between cue words and generates a classification score indicating whether sarcasm is present.
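The general idea of attention-based cue-word scoring can be illustrated with a minimal sketch. This is not the UCF model: the vocabulary, embeddings, and parameters below are hypothetical stand-ins (untrained random vectors), and the code only shows the shape of the approach, namely weighting each token by its relevance to a learned query and mapping the pooled representation to a classification score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with random 8-dim embeddings (real models learn these).
vocab = ["oh", "great", "another", "monday", "love", "it"]
emb = {w: rng.standard_normal(8) for w in vocab}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_classify(tokens, query, w_out, b_out):
    """Score each token against a query vector, pool the attention-weighted
    embeddings, and map the result to a sarcasm probability."""
    E = np.stack([emb[t] for t in tokens])   # (T, 8) token embeddings
    attn = softmax(E @ query)                # (T,) per-token cue weights
    pooled = attn @ E                        # (8,) attention-weighted context
    logit = pooled @ w_out + b_out
    prob = 1.0 / (1.0 + np.exp(-logit))     # classification score in (0, 1)
    return prob, attn

# Untrained random parameters stand in for learned ones here.
query = rng.standard_normal(8)
w_out = rng.standard_normal(8)
prob, attn = attention_classify(["oh", "great", "another", "monday"], query, w_out, 0.0)
```

Because the attention weights sum to one over the tokens, they double as a per-word importance map, which is what makes this style of model comparatively easy to inspect.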
“Essentially, the researchers’ approach is focused on discovering patterns in the text that indicate sarcasm. It identifies cue-words and their relationship to other words that are representative of sarcastic expressions or statements,” read a statement by Kettler.
Publishing their findings recently in the scientific journal Entropy, the researchers demonstrated that their model achieved “nearly perfect sarcasm detection” on five significant datasets drawn from social networking platforms and online media.
In an analysis of cases in which the AI model failed to detect sarcasm, researchers noted the model found it challenging to classify interrogative sentences, which typically end with a question mark. “With no context information, we believe classifying these correctly is a challenging task not only to the deep learning models but also to human annotators,” noted researchers.
“The researchers’ approach is also highly interpretable, making it easier to understand what’s happening under the ‘hood’ of the model,” DARPA pointed out in their press release.
Elaborating on the significance of high interpretability, DARPA explained that “many deep learning models are regarded as ‘black boxes,’ offering few clues to explain their outputs or predictions.” However, in this instance, researchers tailored their sarcasm detection model to ensure elements of the input data crucial for a given task could be easily identified.
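One common way this kind of interpretability works in practice is to read back the model’s attention weights and rank the input words by how much each contributed to the prediction. The sketch below assumes hypothetical, made-up weights purely for illustration; it is not output from the researchers’ model.

```python
import numpy as np

def explain(tokens, attn_weights):
    """Pair each token with its attention weight and sort descending,
    so the words the model leaned on most appear first."""
    order = np.argsort(attn_weights)[::-1]
    return [(tokens[i], float(attn_weights[i])) for i in order]

tokens = ["oh", "great", "another", "monday"]
attn = np.array([0.05, 0.62, 0.08, 0.25])  # made-up weights for illustration
ranked = explain(tokens, attn)
# ranked[0] is the word the model weighted most heavily
```

A ranking like this lets an analyst check whether the model flagged a message as sarcastic for a plausible reason (e.g. an incongruous positive word like “great”) rather than for some spurious artifact of the training data.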
Ultimately, the model’s high interpretability and explainability are vital to building trust in AI-enabled systems and allowing for their use across a multitude of applications in an operational environment.
The UCF team’s sarcasm detector was developed as part of DARPA’s overarching SocialSim program. DARPA says the goal of SocialSim is to create “innovative technologies for high-fidelity computational simulation of online social behavior” to provide a “deeper and more quantitative understanding of adversaries’ use of the global information environment.”
“Accurately detecting sarcasm in text is only a small part of developing these simulation capabilities due to the extremely complex and varied linguistic techniques used in human communication,” said Dr. Kettler, program manager for SocialSim and DARPA’s Influence Campaign Awareness and Sensemaking (INCAS) program. “Knowing when sarcasm is being used is valuable for teaching models what human communication looks like, and subsequently simulating the future course of online content.”
Follow and connect with author Tim McMillan on Twitter: @LtTimMcMillan