As businesses race to embed artificial intelligence everywhere, an unexpected trend has emerged: turning to AI to help their many new bots better understand human emotions.
A new PitchBook report on emerging technologies for enterprise SaaS predicts that this area, called ‘emotional AI,’ is on the rise.
The reasoning goes something like this: if companies deploy AI assistants to executives and employees, and let AI chatbots become front-line salespeople and customer service reps, how can the AI perform well if it can’t tell the difference between an angry ‘What do you mean?’ and a confused ‘What do you mean?’
Emotional AI claims to be the more sophisticated sibling of sentiment analysis, a pre-AI technique that attempts to extract human emotion from text-based interactions, especially on social media. Emotional AI is multimodal, combining visual, audio, and other sensor inputs with machine learning and psychology to try to detect human emotion during an interaction.
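For a concrete sense of the older, text-only technique, here is a minimal sketch using NLTK’s off-the-shelf VADER scorer (one lexicon-based option among many; the example lines are hypothetical). It also illustrates the limitation emotional AI claims to address: identical words score identically, whatever the speaker’s tone.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

analyzer = SentimentIntensityAnalyzer()

# Text alone can't separate an angry "What do you mean?" from a
# confused one: the same words yield the same sentiment scores.
for line in ["What do you mean?", "What do you mean?!"]:
    print(line, analyzer.polarity_scores(line))
```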
Major AI cloud providers offer services that give developers access to emotional AI capabilities, such as the Emotion API in Microsoft Azure Cognitive Services or Amazon Web Services’ Rekognition service. (The latter has been controversial for years.)
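As a rough illustration of what these developer-facing services look like, here is a minimal sketch that sends an image to Amazon Rekognition’s face-analysis endpoint via boto3. The file name is hypothetical, and the call assumes AWS credentials and permissions are already configured; notably, Rekognition’s emotion labels include both ANGRY and CONFUSED.

```python
import boto3  # AWS SDK for Python

# Sketch: analyze one image frame with Amazon Rekognition.
# "customer_frame.jpg" is a hypothetical file name for illustration.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("customer_frame.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # "ALL" includes the Emotions attribute
)

for face in response["FaceDetails"]:
    # Each detected face carries emotion labels with confidence scores.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Detected emotion: {top['Type']} ({top['Confidence']:.1f}%)")
```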
PitchBook says that while emotional AI, even offered as a cloud service, is nothing new, the sudden rise of bots in the workforce gives it more of a future in the business world than ever before.
Derek Hernandez, PitchBook’s senior analyst for emerging technologies, writes in the report, ‘With the proliferation of AI assistants and fully automated human-computer interactions, emotional AI promises to enable more human-like interpretations and responses.’
‘Cameras and microphones are integral to the hardware side of emotional AI. They can be mounted on laptops or mobile phones, or placed separately in a physical space. Additionally, wearable hardware may provide another avenue for emotional AI applications beyond these devices,’ Hernandez told TechCrunch. (So if a customer service chatbot asks for camera access, that could be why.)
To that end, a growing number of startups are working to make this happen. They include Uniphore (which has raised $610 million in total, including a $400 million round led by NEA in 2022), as well as MorphCast, Voicesense, Superceed, Siena AI, audEERING, and Opsis, each of which PitchBook estimates has raised more modest amounts of funding.
Emotional AI is, of course, a very Silicon Valley approach: using technology to solve problems that arise from the combination of technology and humans.
But even if most AI bots eventually acquire some form of automated empathy, that doesn’t mean the solution will actually work.
In fact, emotional AI last generated buzz in Silicon Valley around 2019, when most of the attention in the AI/machine learning field was still focused on computer vision rather than generative language and art. That year, however, a group of researchers published a review of studies concluding that human emotions can’t reliably be inferred from facial movements. In other words, the idea that we can teach an AI to detect human feelings by having it mimic the way humans read them (facial expressions, body language, tone of voice) is, to some degree, misguided.
There is also the possibility that AI regulation, such as the European Union’s Artificial Intelligence Act, which prohibits computerised visual emotion detection systems from being used for specific purposes such as education, could kill the idea. (Some state laws, such as Illinois’ Biometric Information Privacy Act, also prohibit the unauthorised collection of biometric readings.)
All of this gives us a broader picture of the ubiquitous AI future Silicon Valley is currently frantically building. Either these AI bots will attempt to understand emotions in order to perform the tasks humans want to assign them, such as customer service, sales, and human resources, or they won’t be particularly good at any task that actually requires that capability. Perhaps we are looking at an office life full of AI bots on a par with Siri circa 2023. Compared with bots tasked with guessing how everyone is feeling in real time during a meeting, who’s to say which is worse?