As computers become more intelligent, they can handle more manual tasks and understand humans better. Today, AI can recognize human emotions and predict mental states from facial expressions, voice, and text. In fact, research conducted in the 2000s found that machines could already identify emotions from speech better than humans could. As emotion AI technology continues to advance, its popularity and market size are also growing rapidly. However, the technology raises ethical concerns, and criticism of its effectiveness is increasing.
What is affective computing?
Affective computing, also known as emotion AI, enables machines to recognize emotions automatically. Wikipedia defines it as follows:
Affective computing is the development of systems that can recognize, interpret, process, and simulate human feelings and emotions.
It may seem strange that machines can do something so inherently human. However, a growing body of research supports the claim that human emotions are recognizable from facial and verbal cues.
Understanding emotions is critical, especially for companies selling complex products. For those who do not work in customer-facing functions such as sales, marketing, or customer service, it may not be obvious how affective computing benefits businesses. Emotions, guided by the unconscious mind, tend to drive complex decisions. Furthermore, emotional, gut-based choices can outperform deliberate, conscious reasoning when decisions are complex.
Can computers really understand emotions?
People express emotions in surprisingly similar ways across cultures, and machines can pick up the visual and verbal cues of those emotions.
A large body of research since the 1970s demonstrates that even pre-literate cultures with minimal exposure to literate cultures can identify basic emotional expressions such as anger, happiness, or surprise. There is also newer, contradicting evidence supporting the view that individuals express emotions in different ways, and the debate is ongoing. Despite these recent challenges, however, the theory that different people express emotions in similar ways remains widely accepted.
Machines already identify emotions from facial expressions at acceptable levels. In a 2017 study cited more than 30 times, researchers achieved 73% classification accuracy across seven emotional states with a relatively simple model built on the Facial Action Coding System developed by Ekman, one of the pioneers of research on facial expressions and emotions. However, this was achieved under strictly controlled conditions using 3D Microsoft Kinect cameras, and participants posed their facial expressions rather than producing them spontaneously. Despite these caveats, ~70% accuracy is a significant achievement.
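To illustrate the general approach (not the study's actual model or data), emotions can be classified from Facial Action Coding System action-unit intensities with something as simple as a nearest-centroid rule. The action units, intensity values, and centroids below are invented toy data:

```python
# Minimal sketch: classifying emotions from FACS action-unit (AU) intensities
# with a nearest-centroid rule. All numbers here are hypothetical toy values,
# not the 2017 study's dataset or classifier.

EMOTION_CENTROIDS = {
    # Hypothetical mean intensity vectors over (AU6, AU12, AU4, AU1)
    "happiness": (0.9, 0.8, 0.1, 0.1),  # cheek raiser + lip corner puller
    "anger":     (0.1, 0.1, 0.9, 0.2),  # brow lowerer dominant
    "surprise":  (0.2, 0.1, 0.1, 0.9),  # inner brow raiser dominant
}

def classify(au_vector):
    """Return the emotion whose centroid is closest in squared Euclidean distance."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(au_vector, centroid))
    return min(EMOTION_CENTROIDS, key=lambda e: dist(EMOTION_CENTROIDS[e]))

print(classify((0.8, 0.9, 0.0, 0.2)))  # a smile-like AU pattern -> happiness
```

Real systems replace the hand-set centroids with parameters learned from labeled video data, but the pipeline — extract action-unit features, then classify — is the same.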
Is the interest in emotion AI increasing?
We analyzed the last five years of search trends for the terms "affective computing" and "emotion AI" to gauge the technology's popularity. Since the field is relatively mature, interest in "affective computing" is slightly declining. However, "emotion AI" shows an increasing trend and caught up with "affective computing" in 2020, as AI became a more popular term.
According to market research conducted after the coronavirus outbreak, the global affective computing market is projected to grow from $29 billion to $140 billion by 2025, a compound annual growth rate (CAGR) of 37.4% over the forecast period. The growth is driven mostly by opportunities to measure customer satisfaction. Other use cases, such as software testing, employee workload management, and candidate emotion analysis during interviews, are also gaining popularity. Feel free to read more about emotion AI use cases in our article.
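The cited figures are easy to sanity-check: compounding $29 billion at 37.4% per year over the roughly five-year forecast period lands close to the $140 billion projection.

```python
# Sanity check of the cited market forecast: $29B compounding at a 37.4% CAGR
# over a 5-year forecast period (figures from the market research cited above).

start_billion = 29.0
cagr = 0.374
years = 5

end_billion = start_billion * (1 + cagr) ** years
print(round(end_billion, 1))  # ~142.0, consistent with the ~$140B projection
```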
What are the challenges with affective computing?
Limitations in algorithm design and hardware
The accuracy of affective computing is rising with improvements in algorithm design. With advances in technology, AI can identify emotions from new sources such as blood volume pulse and facial electromyography. However, further improvements in algorithms and more advanced hardware are still required for the technology to become widespread in real-life applications.
Ethical issues about increased surveillance
As more use cases of affective computing emerge, some of them require video surveillance or social media monitoring to identify human emotions. While the main goal is to understand users’ mental states and provide better services, some people may not want to be monitored or to have their voices, images, or social media posts analyzed by affective computing software. Advances in emotion AI can therefore raise controversial ethical issues, especially in use cases such as tracking employees during work or analyzing candidates during job interviews. Political campaigns’ use of sentiment analysis already proved deeply unpopular after the 2016 US presidential election.
Ethical issues about bias
Even if people are comfortable with estimates of their emotions being used for analytics, the results may be biased, which is especially concerning in cases like job interviews. Research has claimed that other AI techniques (e.g. facial recognition) exhibit significant bias against underrepresented groups.
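One simple way such bias is measured, sketched here with invented toy data rather than any real audit, is to compare a model's accuracy across demographic groups and look at the gap:

```python
# Sketch of one basic fairness check: per-group accuracy of an emotion
# classifier. The records below are invented toy data; real audits use many
# more metrics (false-positive rate gaps, calibration, etc.).

records = [
    # (group, true_emotion, predicted_emotion) — hypothetical audit sample
    ("group_a", "happy", "happy"), ("group_a", "angry", "angry"),
    ("group_a", "happy", "happy"), ("group_a", "angry", "happy"),
    ("group_b", "happy", "angry"), ("group_b", "angry", "angry"),
    ("group_b", "happy", "happy"), ("group_b", "angry", "happy"),
]

def accuracy_by_group(rows):
    """Return {group: fraction of correct predictions} for each group."""
    totals, correct = {}, {}
    for group, truth, pred in rows:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

accuracies = accuracy_by_group(records)
gap = max(accuracies.values()) - min(accuracies.values())
print(accuracies, gap)  # group_a: 0.75, group_b: 0.5 -> gap of 0.25
```

A large gap between groups is a warning sign that the model systematically misreads some people's emotions, which is exactly the risk in high-stakes settings like hiring.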
Challenges regarding results
Emotion AI based on facial expressions relies on the research on microexpressions by Paul Ekman and others, which has recently come under greater scrutiny. However, most scientists still accept microexpressions as valid signals of emotion.
You can read more about affective computing in our related articles:
- Affective Computing: In-Depth Guide to Emotion AI
- 24 Affective Computing Applications / Use Cases in 2020
If you have questions about affective computing, don’t hesitate to contact us.