Semi-supervised learning attempts to use a large set of unlabeled data to increase the prediction accuracy of machine learning models when the amount of labeled data is limited. In realistic settings, however, unlabeled data may worsen performance because they contain out-of-distribution (OOD) examples that differ from the labeled data. To address this issue, safe semi-supervised deep learning has recently been proposed. This study proposes a new safe semi-supervised algorithm that uses an uncertainty-aware Bayesian neural network. The proposed method, safe uncertainty-based consistency training (SafeUC), uses Bayesian uncertainty to minimize the harmful effects of unlabeled OOD examples. It improves the model's generalization performance by regularizing the network for consistency against uncertain noise. Moreover, to avoid uncertain prediction results, the method includes a practical inference technique based on well-calibrated uncertainty. The effectiveness of the proposed method is demonstrated by experimental results on CIFAR-10 and SVHN, where it achieved state-of-the-art performance across all semi-supervised learning tasks at varying OOD data presence rates.
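The core idea described above (down-weighting uncertain, likely-OOD unlabeled examples in a consistency loss) can be illustrated with a minimal NumPy sketch. This is not the authors' SafeUC implementation; it assumes Bayesian uncertainty is approximated by Monte Carlo dropout samples, and the function names and weighting scheme are hypothetical simplifications for illustration.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Predictive entropy of the mean of MC-dropout softmax samples.

    mc_probs: array of shape (T, N, C) -- T stochastic forward passes,
    N unlabeled examples, C classes. Higher entropy suggests the example
    may be out-of-distribution.
    """
    mean_probs = mc_probs.mean(axis=0)  # (N, C) average over MC samples
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)  # (N,)

def uncertainty_weighted_consistency(mc_probs, perturbed_probs, num_classes):
    """Consistency loss where high-uncertainty (likely OOD) unlabeled
    examples are down-weighted (a simplified stand-in for SafeUC's
    uncertainty-aware regularization).

    perturbed_probs: (N, C) predictions for perturbed/augmented inputs.
    """
    entropy = predictive_entropy(mc_probs)           # (N,)
    max_entropy = np.log(num_classes)                # entropy of a uniform prediction
    weights = 1.0 - entropy / max_entropy            # 1 = confident, ~0 = maximally uncertain
    mean_probs = mc_probs.mean(axis=0)               # (N, C)
    sq_diff = np.mean((mean_probs - perturbed_probs) ** 2, axis=1)  # per-example MSE
    return float(np.mean(weights * sq_diff))
```

A confident prediction has low entropy and keeps its full consistency weight, while a near-uniform (maximally uncertain) prediction is weighted toward zero, so it contributes little to the unsupervised loss.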
Bibliographical note
Funding Information:
The authors would like to thank the editor and reviewers for their careful evaluation and helpful recommendations that significantly improved the quality of the paper. This research was supported by Agency for Defense Development (ADD) (No. UI2100062D, Technique Analysis and Model Prototyping for the Capability Evaluation and Weapon Correlation of Friend and Foe) as a part of AI - Command Decision Support for Future Ground Operations (AICDS).
© 2022 Elsevier Inc.
Keywords
- Bayesian neural network
- Consistency regularization
- Safe semi-supervised deep learning
- Uncertain noise
ASJC Scopus subject areas
- Theoretical Computer Science
- Control and Systems Engineering
- Computer Science Applications
- Information Systems and Management
- Artificial Intelligence