Abstract
Despite the remarkable performance of deep neural networks (DNNs), it is often difficult to deploy DNNs in safety-critical applications because they produce overconfident predictions even on out-of-distribution (OoD) samples. This has motivated the task of OoD detection, in which one method, Outlier Exposure (OE), demonstrated strong performance by leveraging OoD training samples. However, OE and its variants degrade in-distribution (ID) classification performance, and this issue remains unresolved. To address it, we propose Generalized OE (G-OE), which linearly mixes training data drawn from all given samples, including OoD samples, to produce reliable uncertainty estimates. G-OE also includes an effective filtering strategy that reduces the negative effect of OoD samples that are semantically similar to ID samples. We extensively evaluate G-OE on SC-OoD benchmarks: it improves both OoD detection and ID classification over existing OE-based methods.
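The abstract does not specify implementation details, but one plausible reading of "linearly mixes training data from all given samples, including OoD" is a mixup-style objective whose soft targets interpolate between the one-hot ID label and the uniform distribution that Outlier Exposure assigns to OoD inputs. The sketch below illustrates that reading only; the function name `mixed_oe_loss` and all hyperparameters are hypothetical, not the paper's actual G-OE formulation.

```python
import torch
import torch.nn.functional as F

def mixed_oe_loss(model, x_id, y_id, x_ood, num_classes, alpha=1.0):
    """Hypothetical sketch of an OE-style loss with linear input mixing.

    Assumption (not from the paper): inputs are mixed mixup-style, and the
    target interpolates between the one-hot ID label and the uniform
    distribution used as the OoD target in standard Outlier Exposure.
    """
    # Sample a mixing coefficient from a Beta distribution, as in mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Linearly mix an ID batch with an OoD batch of the same shape.
    x_mix = lam * x_id + (1.0 - lam) * x_ood

    # Soft target: lam * one-hot label + (1 - lam) * uniform distribution.
    one_hot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    t_mix = lam * one_hot + (1.0 - lam) * uniform

    # Cross-entropy between the soft target and the model's prediction.
    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(t_mix * log_probs).sum(dim=1).mean()
```

Under this reading, a mixed sample that is mostly OoD is pushed toward a near-uniform output, which is one way such a loss could yield better-calibrated uncertainty without fully sacrificing the ID classification signal.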
| Original language | English |
|---|---|
| Article number | 127371 |
| Journal | Neurocomputing |
| Volume | 577 |
| DOIs | |
| Publication status | Published - 2024 Apr 7 |
Bibliographical note
Publisher Copyright: © 2024 Elsevier B.V.
Keywords
- Confidence
- Out-of-distribution detection
- Outlier Exposure
- Uncertainty
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence