Cognitive model of human visual search with saliency and scene context for real-world images

Yoonhyung Choi, Jinsung Han, Hyungseok Oh, Rohae Myung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

The previous Adaptive Control of Thought-Rational (ACT-R) cognitive architecture model is limited in that it cannot accurately predict human visual search in real-world images, because scene context, which can be as important as saliency, is not included. This study therefore proposed an ACT-R cognitive model that uses saliency and scene context in parallel for human visual search. The model was then validated by comparison with eye-tracking experimental data. Results show that the model data fit the eye-tracking data quite well. In conclusion, the modeling method proposed in this study should be used to predict actual human visual search in real-world images, applying both strategies selectively.
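The record does not include an implementation, but as a rough illustration of the general idea the abstract describes, combining a bottom-up saliency map with a scene-context prior to guide where visual search looks next, a minimal Python sketch follows. The function name, the equal weights, the normalization step, and the winner-take-all selection are illustrative assumptions, not the authors' ACT-R model.

```python
import numpy as np

def next_fixation(saliency, context_prior, w_saliency=0.5, w_context=0.5):
    """Pick the next fixation as the peak of a combined priority map.

    saliency      -- 2-D array of bottom-up saliency values (hypothetical input)
    context_prior -- 2-D array encoding where the target is likely given the scene
    The equal default weights are an illustrative assumption, not values
    reported in the paper.
    """
    def norm(m):
        # Normalize each map to [0, 1] so neither dominates by scale alone.
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m

    priority = w_saliency * norm(saliency) + w_context * norm(context_prior)
    # Winner-take-all: return the (row, col) of the highest-priority location.
    return np.unravel_index(np.argmax(priority), priority.shape)

# Example usage with random stand-in maps.
rng = np.random.default_rng(0)
sal = rng.random((60, 80))
ctx = rng.random((60, 80))
print(next_fixation(sal, ctx))
```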

Original language: English
Title of host publication: 2015 International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2015
Publisher: Human Factors and Ergonomics Society Inc.
Pages: 706-710
Number of pages: 5
ISBN (Electronic): 9780945289470
DOIs
Publication status: Published - 2015
Event: 59th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2015 - Los Angeles, United States
Duration: 2015 Oct 26 - 2015 Oct 30

Publication series

Name: Proceedings of the Human Factors and Ergonomics Society
Volume: 2015-January
ISSN (Print): 1071-1813

Conference

Conference: 59th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2015
Country/Territory: United States
City: Los Angeles
Period: 15/10/26 - 15/10/30

ASJC Scopus subject areas

  • Human Factors and Ergonomics
