Abstract
When inferring emotions, humans rely on a number of cues, including not only facial expressions and body posture, but also expressor-external, contextual information. The goal of the present study was to compare the impact of such contextual information on emotion processing in humans and two deep neural network (DNN) models. We used results from a human experiment in which two types of pictures were rated for valence and arousal: the first type depicted people expressing an emotion in a social context that included other people; the second was a context-reduced version in which all information except the target expressor was blurred out. The resulting human ratings of valence and arousal were systematically decreased in the context-reduced version, highlighting the importance of context. We then compared the human ratings with those of two DNN models (one trained on face images only, the other trained on contextual information as well). Analyses of both the categorical and the valence/arousal ratings showed that, despite some superficial similarities, both models failed to capture human rating patterns in both the context-rich and context-reduced conditions. Our study emphasizes the importance of a more holistic, multi-modal training regime with richer human data for building better emotion-understanding systems in affective computing.
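As a rough illustration of the kind of comparison the abstract describes, the sketch below correlates human and model valence ratings across conditions. All rating values and the helper `pearson_r` are hypothetical placeholders for illustration, not data or code from the study.

```python
# Hypothetical sketch: comparing human and DNN valence ratings across
# context-rich and context-reduced conditions. All numbers below are
# illustrative placeholders, not results from the study.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-image mean valence ratings (hypothetical values on a 1-9 scale).
human_context_rich    = [6.1, 2.3, 7.0, 4.5, 5.2]
human_context_reduced = [5.4, 2.9, 6.1, 4.4, 4.8]
model_context_rich    = [5.0, 5.2, 5.1, 4.9, 5.3]

# One way to quantify the context effect: the mean shift in human
# ratings when everything but the expressor is blurred out.
mean_shift = sum(r - c for r, c in zip(human_context_reduced,
                                       human_context_rich)) / len(human_context_rich)

# And one way to quantify human-model agreement: the correlation
# between per-image human and model ratings in the same condition.
r_human_model = pearson_r(human_context_rich, model_context_rich)
```

A weak or near-zero `r_human_model` alongside a systematic `mean_shift` would correspond to the pattern the abstract reports: human ratings respond to context while the models' ratings do not track human judgments.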
Original language | English |
---|---|
Title of host publication | ICMI 2022 - Companion Publication of the 2022 International Conference on Multimodal Interaction |
Publisher | Association for Computing Machinery |
Pages | 127-133 |
Number of pages | 7 |
ISBN (Electronic) | 9781450393898 |
DOIs | |
Publication status | Published - 2022 Nov 7 |
Event | 24th ACM International Conference on Multimodal Interaction, ICMI 2022 - Bangalore, India Duration: 2022 Nov 7 → 2022 Nov 11 |
Publication series
Name | ACM International Conference Proceeding Series |
---|---|
Conference
Conference | 24th ACM International Conference on Multimodal Interaction, ICMI 2022 |
---|---|
Country/Territory | India |
City | Bangalore |
Period | 22/11/7 → 22/11/11 |
Bibliographical note
Funding Information: This study was supported by the National Research Foundation of Korea under project BK21 FOUR (grants NRF-2017M3C7A1041824 and NRF-2019R1A2C2007612), as well as by three Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning; 2019-0-00079, Department of Artificial Intelligence, Korea University; 2021-0-02068, Artificial Intelligence Innovation Hub).
Publisher Copyright:
© 2022 ACM.
Keywords
- DNNs
- affective computing
- contextual information
- emotion recognition
- user study
ASJC Scopus subject areas
- Software
- Human-Computer Interaction
- Computer Vision and Pattern Recognition
- Computer Networks and Communications