Domain generalization aims to learn a domain-invariant representation from multiple source domains so that a model generalizes well to unseen target domains. Such models are typically trained on examples drawn randomly from all source domains, which can destabilize training because optimization is pulled in conflicting gradient directions. Here, we explore inter-domain curriculum learning (IDCL), in which source domains are presented in a meaningful order so that progressively more complex domains are introduced over time. Experiments show significant improvements on both the PACS and Office–Home benchmarks, and our method improves on the state-of-the-art by 1.08%.
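The core scheduling idea, exposing easier source domains first and adding harder ones over successive stages, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the domain names, the `difficulty` scores, and the stage-wise "grow the pool" schedule are all assumptions for demonstration.

```python
import random

def curriculum_schedule(domains, difficulty, seed=0):
    """Order source domains from easy to hard, then emit training
    stages: each stage adds the next-hardest domain's examples to
    the accumulated pool and shuffles it (one common curriculum
    pattern; the paper's exact schedule may differ).

    domains: list of {"name": str, "examples": list}
    difficulty: dict mapping domain name -> difficulty score
    Returns a list of (domain_name, shuffled_pool) stages.
    """
    rng = random.Random(seed)
    ordered = sorted(domains, key=lambda d: difficulty[d["name"]])
    pool, stages = [], []
    for dom in ordered:
        pool.extend(dom["examples"])      # grow the pool with the new domain
        stage = pool.copy()
        rng.shuffle(stage)                # shuffle within the current stage
        stages.append((dom["name"], stage))
    return stages

# Hypothetical PACS-style domains with assumed difficulty scores.
domains = [
    {"name": "sketch",  "examples": ["s1", "s2"]},
    {"name": "photo",   "examples": ["p1", "p2"]},
    {"name": "cartoon", "examples": ["c1"]},
]
difficulty = {"photo": 0.1, "cartoon": 0.5, "sketch": 0.9}
stages = curriculum_schedule(domains, difficulty)
```

Each stage can then drive one training phase, so early phases see only "easy" domains while the final phase trains on all source data.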
Bibliographical note

Funding Information:
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Challenge and Advanced Network of HRD program (2020-0-01826) and the Sustainable & Robust Autonomous Driving AI Education/Development Integrated Platform (2021-0-00994), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation).
© 2021 The Author(s)
Keywords

- Deep neural networks
- Domain generalization
- Inter-domain curriculum learning
ASJC Scopus subject areas
- Information Systems
- Hardware and Architecture
- Computer Networks and Communications
- Artificial Intelligence