Towards an Interpretable Deep Driving Network by Attentional Bottleneck

Jinkyu Kim, Mayank Bansal

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Deep neural networks are a key component of behavior prediction and motion generation for self-driving cars. One of their main drawbacks is a lack of transparency: ideally, they should provide easily interpretable rationales for what triggers certain behaviors. We propose an architecture called Attentional Bottleneck with the goal of improving transparency. Our key idea is to combine visual attention, which identifies what aspects of the input the model is using, with an information bottleneck that enables the model to only use aspects of the input which are important. This not only provides sparse and interpretable attention maps (e.g. focusing only on specific vehicles in the scene), but also adds this transparency at no cost to model accuracy. In fact, we find improvements in accuracy when applying Attentional Bottleneck to the ChauffeurNet model, whereas accuracy deteriorates with a traditional visual attention model.
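The core idea described in the abstract, gating the driving model's spatial features with an attention map while a bottleneck keeps that map sparse, can be illustrated with a small sketch. This is not the authors' implementation: the module structure, layer sizes, and the entropy regularizer used here as a stand-in for the information-bottleneck objective are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionalBottleneckSketch(nn.Module):
    """Illustrative sketch only: a spatial attention map gates perception
    features, and an entropy penalty keeps the map peaked (sparse) so that
    only a few scene regions feed the downstream driving policy."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv that scores each spatial location (layer choice is an assumption)
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) features from a perception backbone
        logits = self.score(feats)                       # (B, 1, H, W)
        b, _, h, w = logits.shape
        # Softmax over all spatial positions -> normalized attention map
        attn = F.softmax(logits.view(b, -1), dim=-1).view(b, 1, h, w)
        gated = feats * attn                             # planner sees only attended evidence
        # Bottleneck-style regularizer (assumption): penalizing entropy of the
        # spatial distribution pushes the map toward a sparse, interpretable focus
        entropy = -(attn * attn.clamp_min(1e-8).log()).sum(dim=(1, 2, 3)).mean()
        return gated, attn, entropy
```

In training, the entropy term would be added to the task loss with a small weight, trading off how much input information the attention map is allowed to pass through.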

Original language: English
Article number: 9483668
Pages (from-to): 7349-7356
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 4
DOIs
Publication status: Published - Oct 2021

Keywords

  • Explainable AI (XAI)
  • deep driving network

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
