TY - GEN
T1 - Contextual modulation of affect
T2 - 24th ACM International Conference on Multimodal Interaction, ICMI 2022
AU - Shin, Soomin
AU - Kim, Doo Yon
AU - Wallraven, Christian
N1 - Funding Information:
This study was supported by the National Research Foundation of Korea under project BK21 FOUR, grants NRF-2017M3C7A1041824, NRF-2019R1A2C2007612, as well as by three Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning; 2019-0-00079, Department of Artificial Intelligence, Korea University; 2021-0-02068, Artificial Intelligence Innovation Hub).
Publisher Copyright:
© 2022 ACM.
PY - 2022/11/7
Y1 - 2022/11/7
N2 - When inferring emotions, humans rely on a number of cues, including not only facial expressions and body posture, but also expressor-external, contextual information. The goal of the present study was to compare the impact of such contextual information on emotion processing in humans and two deep neural network (DNN) models. We used results from a human experiment in which two types of pictures were rated for valence and arousal: the first type depicted people expressing an emotion in a social context including other people; the second was a context-reduced version in which all information except for the target expressor was blurred out. The resulting human ratings of valence and arousal were systematically decreased in the context-reduced version, highlighting the importance of context. We then compared human ratings with those of two DNN models (one trained on face images only, the other trained also on contextual information). Analyses of both the categorical and the valence/arousal ratings showed that although there were some superficial similarities, both models failed to capture human rating patterns in both context-rich and context-reduced conditions. Our study emphasizes the importance of a more holistic, multi-modal training regime with richer human data to build better emotion-understanding systems in the area of affective computing.
AB - When inferring emotions, humans rely on a number of cues, including not only facial expressions and body posture, but also expressor-external, contextual information. The goal of the present study was to compare the impact of such contextual information on emotion processing in humans and two deep neural network (DNN) models. We used results from a human experiment in which two types of pictures were rated for valence and arousal: the first type depicted people expressing an emotion in a social context including other people; the second was a context-reduced version in which all information except for the target expressor was blurred out. The resulting human ratings of valence and arousal were systematically decreased in the context-reduced version, highlighting the importance of context. We then compared human ratings with those of two DNN models (one trained on face images only, the other trained also on contextual information). Analyses of both the categorical and the valence/arousal ratings showed that although there were some superficial similarities, both models failed to capture human rating patterns in both context-rich and context-reduced conditions. Our study emphasizes the importance of a more holistic, multi-modal training regime with richer human data to build better emotion-understanding systems in the area of affective computing.
KW - DNNs
KW - affective computing
KW - contextual information
KW - emotion recognition
KW - user study
UR - http://www.scopus.com/inward/record.url?scp=85142376557&partnerID=8YFLogxK
U2 - 10.1145/3536220.3558036
DO - 10.1145/3536220.3558036
M3 - Conference contribution
AN - SCOPUS:85142376557
T3 - ACM International Conference Proceeding Series
SP - 127
EP - 133
BT - ICMI 2022 - Companion Publication of the 2022 International Conference on Multimodal Interaction
PB - Association for Computing Machinery
Y2 - 7 November 2022 through 11 November 2022
ER -