Comparing Facial Expression Recognition in Humans and Machines: Using CAM, GradCAM, and Extremal Perturbation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Facial expression recognition (FER) attracts significant research in both psychology and machine learning and has a wide range of applications. Despite a wealth of research on human FER and considerable progress in computational FER enabled by deep neural networks (DNNs), comparatively little work has directly compared how closely DNNs match human performance. In this work, we compared the recognition performance and attention patterns of humans and machines during a two-alternative forced-choice FER task. Human attention was gathered through click data that progressively uncovered a face, whereas model attention was obtained using three popular techniques from explainable AI: CAM, GradCAM, and Extremal Perturbation. In both cases, performance was measured as percent correct. For this task, we found that humans significantly outperformed machines. In terms of attention patterns, we found that Extremal Perturbation provided the best overall fit to the human attention maps during the task.
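As a rough illustration of how such model attention maps are typically extracted, the sketch below computes a Grad-CAM heatmap for a single face image. It assumes a generic torchvision ResNet-18 with seven expression classes and hooks its last convolutional block (`layer4`); the model, class count, and layer choice are illustrative assumptions, not the network or preprocessing used in the paper.

```python
# Minimal Grad-CAM sketch (one of the three attribution methods named above).
# Assumptions: torchvision ResNet-18, 7 expression classes, layer4 as the
# target layer -- all placeholders, not the authors' actual FER model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=7)  # assumption: 7 basic expression classes
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block to capture its features and gradients.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return an [H, W] attention map in [0, 1] for one [1, 3, H, W] image."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                      # [1, C, h, w]
    grads = gradients["value"]                       # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel weights (GAP of grads)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

# Usage: heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

Such a normalized heatmap can then be compared against an aggregated human click map, for example via correlation or overlap measures, which is the kind of attention comparison the study reports.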

Original language: English
Title of host publication: Pattern Recognition - 6th Asian Conference, ACPR 2021, Revised Selected Papers
Editors: Christian Wallraven, Qingshan Liu, Hajime Nagahara
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 403-416
Number of pages: 14
ISBN (Print): 9783031023743
DOIs
Publication status: Published - 2022
Event: 6th Asian Conference on Pattern Recognition, ACPR 2021 - Virtual, Online
Duration: 2021 Nov 9 - 2021 Nov 12

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13188 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 6th Asian Conference on Pattern Recognition, ACPR 2021
City: Virtual, Online
Period: 21/11/9 - 21/11/12

Bibliographical note

Funding Information:
Acknowledgments. This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP; No. 2019-0-00079, Department of Artificial Intelligence, Korea University) and a National Research Foundation of Korea (NRF; NRF-2017M3C7A1041824) grant funded by the Korean government (MSIT).

Publisher Copyright:
© 2022, Springer Nature Switzerland AG.

Keywords

  • AffectNet
  • Facial expression recognition
  • Human-in-the-loop
  • Humans versus machines

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

