Head movement module in ACT-R for multi-display environment

Hyungseok Oh, Seongsik Jo, Rohae Myung

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    1 Citation (Scopus)

    Abstract

    The ACT-R cognitive architecture deals only with a single-display interface, so it must be expanded and improved to describe real environments that require more than one display. This paper therefore proposes a method for describing human performance in a multi-display environment by developing a head module, because the behavior of searching for objects beyond the preferred visual angle of ±15° cannot be modeled with the visual module in ACT-R. The results show that an ACT-R model with the head module is necessary when performing tasks in a multi-display environment. In addition, a separate ACT-R model was developed for cases in which a different head movement pattern is involved, such as peripheral vision.
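    A minimal illustrative sketch (not the authors' code, and not part of ACT-R's actual API): the abstract's key criterion is whether a target lies beyond the preferred visual angle of ±15°, in which case a head movement, rather than an eye movement alone, would be required. The function name and geometry below are assumptions for illustration.

    ```python
    import math

    # Preferred visual angle from the abstract; targets beyond +/-15 degrees
    # are assumed to require a head movement in a multi-display setup.
    PREFERRED_VISUAL_ANGLE_DEG = 15.0

    def requires_head_movement(target_x_cm, target_y_cm, viewing_distance_cm):
        """Return True if the target's eccentricity exceeds the preferred
        visual angle, so a model would need a head (not just eye) movement.

        Hypothetical helper: coordinates are offsets from the line of sight,
        in centimeters, at the given viewing distance.
        """
        offset_cm = math.hypot(target_x_cm, target_y_cm)
        eccentricity_deg = math.degrees(math.atan2(offset_cm, viewing_distance_cm))
        return eccentricity_deg > PREFERRED_VISUAL_ANGLE_DEG
    ```

    For example, a target 40 cm off-axis at a 60 cm viewing distance sits at roughly 34° of eccentricity, well beyond the ±15° preferred visual angle, while a target 10 cm off-axis (about 9.5°) does not.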

    Original language: English
    Title of host publication: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting, HFES 2011
    Pages: 1836-1839
    Number of pages: 4
    DOIs
    Publication status: Published - 2011
    Event: 55th Human Factors and Ergonomics Society Annual Meeting, HFES 2011 - Las Vegas, NV, United States
    Duration: 2011 Sept 19 - 2011 Sept 23

    Publication series

    Name: Proceedings of the Human Factors and Ergonomics Society
    ISSN (Print): 1071-1813


    ASJC Scopus subject areas

    • Human Factors and Ergonomics
