Generalized Outlier Exposure: Towards a trustworthy out-of-distribution detector without sacrificing accuracy

Jiin Koo, Sungjoon Choi, Sangheum Hwang

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Despite the remarkable performance of deep neural networks (DNNs), it is often challenging to deploy them in safety-critical applications because they make overconfident predictions even on out-of-distribution (OoD) samples. This has motivated the task of OoD detection, and one prominent approach, Outlier Exposure (OE), achieves strong detection performance by leveraging OoD training samples. However, OE and its variants degrade in-distribution (ID) classification performance, and this issue remains unresolved. To address it, we propose Generalized OE (G-OE), which linearly mixes training data drawn from all given samples, including OoD ones, to produce reliable uncertainty estimates. G-OE also includes an effective filtering strategy that reduces the negative effect of OoD samples that are semantically similar to ID samples. We extensively evaluate G-OE on SC-OoD benchmarks: it improves both OoD detection and ID classification over existing OE-based methods.
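The abstract's core idea of linearly mixing ID and OoD training samples can be illustrated with a small sketch. This is not the authors' exact G-OE objective; it is a hypothetical NumPy illustration that combines a mixup-style interpolation with the standard Outlier Exposure recipe of pushing OoD inputs toward a uniform predictive distribution. The function names and the Beta-distributed mixing coefficient are assumptions, not details taken from the paper.

```python
import numpy as np

def mixed_target(id_label, num_classes, lam):
    """Soft target for a sample mixed from an ID image (weight lam)
    and an OoD image (weight 1 - lam). The OoD part contributes a
    uniform distribution, as in Outlier Exposure-style training.
    Hypothetical sketch, not the exact G-OE formulation."""
    one_hot = np.eye(num_classes)[id_label]
    uniform = np.full(num_classes, 1.0 / num_classes)
    return lam * one_hot + (1.0 - lam) * uniform

def cross_entropy(logits, target):
    """Cross-entropy between a soft target and softmax(logits)."""
    logits = logits - logits.max()  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return float(-(target * log_probs).sum())

rng = np.random.default_rng(0)
lam = rng.beta(1.0, 1.0)  # assumed mixing coefficient distribution

# Target for a sample that is 75% an ID image of class 2 (of 4 classes):
target = mixed_target(id_label=2, num_classes=4, lam=0.75)
# target sums to 1 and places the most mass on class 2,
# while reserving probability for the other classes.
```

A model trained against such soft targets is encouraged to be confident on pure ID inputs and close to uniform on pure OoD inputs, with mixed samples interpolating between the two, which is one way to obtain better-calibrated uncertainty estimates.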

Original language: English
Article number: 127371
Journal: Neurocomputing
Volume: 577
DOIs
Publication status: Published - 2024 Apr 7

Bibliographical note

Publisher Copyright:
© 2024 Elsevier B.V.

Keywords

  • Confidence
  • Out-of-distribution detection
  • Outlier Exposure
  • Uncertainty

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
