Abstract
Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eye trackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, relying on a head-mounted eye tracker and a body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the same input data. Our evaluation of both methods indicates that a regression approach can deliver results comparable to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates, and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation.
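The sketch below illustrates the two approaches named in the abstract under stated assumptions: a geometric routine that rotates an eye-in-head gaze direction into world coordinates using the head pose and intersects the resulting ray with a planar display, and a Gaussian process regressor that maps raw eye/head measurements straight to on-screen coordinates. The variable names, coordinate-frame conventions, and the use of scikit-learn are illustrative assumptions, not the authors' released software.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def gaze_on_plane(head_pos, head_rot, eye_dir_head, plane_point, plane_normal):
    """Geometric approach: intersect the world-frame gaze ray with a planar display.

    head_pos      : (3,) head position in world coordinates
    head_rot      : (3, 3) rotation matrix, head frame -> world frame
    eye_dir_head  : (3,) unit gaze direction in the head frame
    plane_point   : (3,) any point on the display plane
    plane_normal  : (3,) unit normal of the display plane
    """
    gaze_dir_world = head_rot @ eye_dir_head            # rotate gaze into the world frame
    denom = gaze_dir_world @ plane_normal
    if abs(denom) < 1e-9:                                # gaze ray parallel to the display
        return None
    t = ((plane_point - head_pos) @ plane_normal) / denom
    if t < 0:                                            # display lies behind the observer
        return None
    return head_pos + t * gaze_dir_world                 # 3-D point of regard on the plane


def fit_gaze_regressor(features, screen_points):
    """Regression approach: learn a mapping from eye/head measurements to screen coordinates.

    features      : (n_samples, n_features) eye-in-head and head-in-world measurements
    screen_points : (n_samples, 2) calibration targets on the display
    """
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(features, screen_points)
    return gp
```

As a usage note, `gp.predict(new_features, return_std=True)` returns both the estimated screen coordinates and a per-sample standard deviation, which is one way to obtain the confidence bounds on gaze estimates that the abstract attributes to the regression approach.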
| Original language | English |
| --- | --- |
| Article number | 200 |
| Journal | Frontiers in Human Neuroscience |
| Volume | 8 |
| Issue number | 1 APR |
| DOIs | |
| Publication status | Published - 2014 Apr 8 |
Keywords
- Active vision
- Calibration method
- Eye movement
- Eye tracking
- Gaussian processes
- Gaze measurement
ASJC Scopus subject areas
- Neuropsychology and Physiological Psychology
- Neurology
- Psychiatry and Mental health
- Biological Psychiatry
- Behavioral Neuroscience