Human Cognition for Mitigating the Paradox of AI Explainability: A Pilot Study on Human Gaze-based Text Highlighting

Abstract
Artificial Intelligence (AI) explainability plays a crucial role in fostering robust Human-AI Interaction (HAI). However, limitations in existing AI explainability methods lead to circular reasoning that compromises decision robustness. To address this challenge, we propose leveraging human cognition to enhance explainability, aligning explanations with analysis goals without relying on potentially biased labels. In a pilot study of text highlighting driven by human gaze patterns, we demonstrate that gaze-based highlighting significantly reduces decision time for proficient readers without significantly affecting accuracy or bias. We conclude by emphasizing the value of human cognition-based explainability in advancing explainable AI (XAI) and HAI.
| Original language | English |
|---|---|
| Journal | Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS |
| Volume | 37 |
| DOIs | |
| Publication status | Published - 12 May 2024 |
| Event | 37th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2024, Miramar Beach, United States, 19-21 May 2024 |
Bibliographical note
Publisher Copyright: © 2024 by the authors.
ASJC Scopus subject areas
- Artificial Intelligence
- Software