What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.
These technologies are referred to as “emotion AI.” Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing, or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published “Affective Computing.”
Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for a much more natural interaction between humans and machines. “Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly,” Hernandez said. “How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”
While humans might currently have the upper hand in reading emotions, machines are gaining ground using their own strengths. Machines are very good at analyzing large amounts of data, explained MIT Sloan professor Erik Brynjolfsson. They can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. And they can analyze images and pick up subtleties in micro-expressions on humans’ faces that happen too fast for a person to recognize.
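To make that concrete, here is a minimal sketch of the voice-analysis idea, assuming a simple feature-based approach: extract prosodic features (pitch and loudness) from a recording and hand them to a classifier. The feature set and the `emotion_clf` model are hypothetical placeholders; production systems rely on far richer features and learned models.

```python
import numpy as np
import librosa  # pip install librosa

def prosodic_features(path: str) -> np.ndarray:
    """Summarize the pitch and loudness of a speech recording."""
    y, sr = librosa.load(path, sr=16000)  # mono audio at 16 kHz
    # Frame-level pitch track; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[~np.isnan(f0)]               # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]    # frame-level loudness
    # Raised pitch and greater pitch/energy variation are the kinds of
    # inflections that can correlate with stress or anger.
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

# features = prosodic_features("call_snippet.wav")
# label = emotion_clf.predict([features])   # hypothetical trained classifier
```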
“We have a lot of neurons in our brain for social interactions. We’re born with some of those skills, and then we learn more. It makes sense to use technology to connect to our social brains, not just our analytical brains,” Brynjolfsson said. “Just like we can understand speech and machines can communicate in speech, we also understand and communicate with humor and other kinds of emotions. And machines that can speak that language — the language of emotions — are going to have better, more effective interactions with us. It’s great that we’ve made some progress; it’s just something that wasn’t an option 20 or 30 years ago, and now it’s on the table.”
Which industries are already using emotion AI?
Advertising — In 2009, Rana el Kaliouby, PhD ’06, and Picard founded Affectiva, an emotion AI company based in Boston, which specializes in automotive AI and advertising research — the latter for 25 percent of the Fortune 500 companies.
“Our technology captures these visceral, subconscious reactions, which we have found correlates very strongly with actual consumer behavior, like sharing the ad or actually buying the product,” el Kaliouby said.
In the case of advertising research, once a client has been vetted and has agreed to Affectiva’s terms of use (like promising not to exploit the technology for surveillance or lie detection), the client is given access to Affectiva’s technology. With a customer’s consent, the technology uses the person’s phone or laptop camera to capture their reactions while they watch a particular advertisement.
Self-reporting — like feedback during a test group — is helpful, el Kaliouby said, but getting a moment-by-moment response allows marketers to really tell whether a particular ad resonated with people or was offensive, or whether it was confusing or tugged at the heartstrings.
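As a toy illustration of why moment-by-moment data beats a single after-the-fact rating (this is not Affectiva’s actual pipeline, and the scores below are simulated), per-frame expression scores can be averaged into a per-second timeline and lined up against the ad’s scenes:

```python
import numpy as np

def reaction_timeline(frame_scores: np.ndarray, fps: int = 30) -> np.ndarray:
    """Average per-frame expression scores (0..1) into one value per second."""
    n_seconds = len(frame_scores) // fps
    return frame_scores[: n_seconds * fps].reshape(n_seconds, fps).mean(axis=1)

# Simulated smile scores for a 10-second spot where the joke lands at second 6.
rng = np.random.default_rng(0)
scores = rng.normal(0.2, 0.05, 300)
scores[180:270] += 0.5           # viewers smile during the punchline
scores = np.clip(scores, 0.0, 1.0)
print(reaction_timeline(scores).round(2))  # the spike shows exactly when it landed
```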
Call centers — Technology from Cogito, a company co-founded in 2007 by MIT Sloan alumni, helps call center agents identify the moods of customers on the phone and adjust how they handle the conversation in real time. Cogito’s voice-analytics software is based on years of human behavior research to identify voice patterns.
Mental health — In December 2018, Cogito launched a spinoff called CompanionMx and an accompanying mental health monitoring app. The Companion app listens to someone speaking into their phone and analyzes the speaker’s voice and phone use for signs of anxiety and mood changes.
The app improves users’ self-awareness and can help build coping skills, including steps for stress reduction. The company has worked with the Department of Veterans Affairs, Massachusetts General Hospital, and Brigham & Women’s Hospital in Boston.
Another emotion AI-driven technology for mental health is a wearable device developed at the MIT Media Lab that monitors a person’s heartbeat to tell whether they are experiencing something like stress, pain, or frustration. The monitor then releases a scent to help the wearer adjust to the negative emotion they’re having at that moment.
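The heartbeat idea can be grounded in a standard heart-rate-variability metric. Below is a simplified sketch assuming an RMSSD-based rule (root mean square of successive differences between beats, where lower variability is commonly associated with stress); the 20 ms threshold and the stress flag are illustrative assumptions, since a real wearable would calibrate against the wearer’s own baseline.

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """RMSSD over a window of beat-to-beat (RR) intervals, in milliseconds."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

def looks_stressed(rr_intervals_ms: np.ndarray, threshold_ms: float = 20.0) -> bool:
    # Hypothetical rule: flag stress when variability drops below a threshold.
    return rmssd(rr_intervals_ms) < threshold_ms

calm = np.array([820.0, 790.0, 850.0, 805.0, 840.0])   # varied beat intervals
tense = np.array([650.0, 652.0, 649.0, 651.0, 650.0])  # rigid, fast rhythm
print(looks_stressed(calm), looks_stressed(tense))     # False True
```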
Media Lab researchers also built an algorithm that uses phone data and a wearable device to predict varying degrees of depression.
Automotive — Hernandez, the Media Lab researcher, is currently working on a team putting emotion AI into vehicles.
While much attention has been paid to safety in the environment outside a car, the inside presents a range of distractions that can also affect safety. Consider a car that could tell, based on elevated blood pressure, that a driver was arguing with the passenger next to them, and adjust the speed of the distracted operator. Or a sensor that signals the steering wheel to subtly maneuver the car back to the middle of the lane when a sleep-deprived driver unknowingly drifts toward the curb.
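Reduced to code, those scenarios amount to a mapping from an inferred driver state to an intervention. The sketch below shows one such rule table; the state fields, thresholds, and action names are all hypothetical, not any carmaker’s actual API.

```python
from dataclasses import dataclass

@dataclass
class CabinState:
    drowsiness: float     # 0..1, e.g. inferred from eyelid-closure tracking
    agitation: float      # 0..1, e.g. inferred from voice and blood pressure
    lane_offset_m: float  # signed drift from the lane center, in meters

def choose_intervention(s: CabinState) -> str:
    if s.drowsiness > 0.7 and abs(s.lane_offset_m) > 0.5:
        return "lane_centering_assist"   # nudge back toward the middle of the lane
    if s.agitation > 0.8:
        return "reduce_speed_and_alert"  # ease off while the driver is distracted
    if s.drowsiness > 0.7:
        return "drowsiness_alert"        # chime or seat vibration
    return "none"

print(choose_intervention(CabinState(0.9, 0.2, -0.8)))  # lane_centering_assist
```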
Affectiva has a similar automotive AI service of its own, which monitors the driver’s state and occupants’ reactions to improve both road safety and the in-cabin experience.
Assistive services — Some people with autism find it challenging to communicate emotionally. That’s where emotion AI can be a sort of “assistive technology,” Hernandez said. Wearable monitors can pick up on signals in someone with autism (like an elevated pulse rate) and on subtleties in facial expressions or body language that others might not be able to see.
Hernandez said there are also “communicative prostheses” that help autistic people learn how to read others’ facial expressions. One example is a game in which the person uses the camera on a tablet to identify “smiley” or “frowny” faces on the people around them.
“That is a way for them to engage with other people and also learn how facial expressions work,” Hernandez said, adding that this video technology that measures moods in “smiley” or “frowny” faces could work for customer feedback in crowded theme parks or hospital waiting rooms, or could be used to provide anonymous feedback to upper management in a large office.
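A rough version of the “smiley” detector can be built with off-the-shelf tools. The sketch below uses OpenCV’s stock Haar cascades, which are far cruder than modern expression models (and ship with no frown detector, so this toy labels faces only as smiling or not), but it conveys the detect-a-face-then-classify-its-expression loop:

```python
import cv2  # pip install opencv-python

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def label_faces(frame):
    """Return a 'smiley' / 'not smiling' label for each face in a camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]  # only search for smiles inside the face
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        labels.append("smiley" if len(smiles) > 0 else "not smiling")
    return labels

# cap = cv2.VideoCapture(0)     # tablet or laptop camera
# ok, frame = cap.read()
# print(label_faces(frame))     # e.g. ["smiley", "not smiling"]
```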
Is emotion AI something to welcome or to worry about?
Hernandez recommended that any business interested in applying this technology promote a healthy internal discussion, one that covers the technology’s benefits, what it makes possible, and how to use it in ways that protect privacy.
“What I tell companies is think about what aspects of emotional intelligence should play a critical role in your business,” Hernandez said. “If you were to have that emotional interaction, how would that change, can you use technology for that?”
El Kaliouby said she sees potential in expanding the technology to new use cases, for example, using the call center technology to understand the emotional well-being of employees, or for other mental health uses. But coming off as Big Brother is a legitimate worry, and one that will have to be continuously addressed as the technology intersects with privacy. To that point, el Kaliouby said that Affectiva requires opt-in and consent for all use cases of its technology.
Another thing to keep in mind is that the technology is only as good as its programmer.
Brynjolfsson warned that as these technologies are rolled out, they need to work well for all people, not just the subset of the population represented in the training data.
“For instance, recognizing emotions in an African American face sometimes can be difficult for a machine that’s trained on Caucasian faces,” Brynjolfsson said. “And some of the gestures or voice inflections in one culture may mean something very different in a different culture.”
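One practical way to surface that problem is to report a model’s accuracy per demographic group rather than as a single aggregate number, as in this hypothetical sketch (the column names and data are invented):

```python
import pandas as pd  # pip install pandas

def accuracy_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Per-group accuracy; a large gap between groups signals biased training data."""
    return (df["predicted"] == df["actual"]).groupby(df[group_col]).mean()

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    ["happy", "angry", "happy", "happy", "angry", "happy"],
    "predicted": ["happy", "angry", "happy", "angry", "angry", "angry"],
})
print(accuracy_by_group(df))  # group A: 1.00, group B: ~0.33
```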
Overall, what’s important to remember is that when the technology is used thoughtfully, its ultimate benefits can and should be greater than the costs, Brynjolfsson said, a sentiment echoed by el Kaliouby, who said it is possible to integrate the technology responsibly. “The paradigm is not human versus machine — it’s really machine augmenting human,” she said. “It’s human plus machine.”
Ready to go deeper?
- Read MIT Sloan Management Review’s How Emotion Sensing Can Reshape the Workplace.
- Watch Hernandez’s presentation on Bringing Emotional Intelligence to Technology to Combat Stress.
- Read Picard’s 2001 paper Toward Machines With Emotional Intelligence.
- Read Hernandez’s 2018 paper CultureNet: A Deep Learning Approach for Engagement Intensity Estimation from Face Images of Children with Autism.
- Apply for Artificial Intelligence: Implications for Business Strategy, a self-paced MIT Sloan Executive Education course.
Illustration: Andrea Mongia