Inter-domain curriculum learning for domain generalization

Daehee Kim, Jinkyu Kim, Jaekoo Lee

Research output: Contribution to journal › Article › peer-review


Domain generalization aims to learn a domain-invariant representation from multiple source domains so that a model can generalize well to unseen target domains. Such models are typically trained on examples drawn randomly from all source domains, which can destabilize training because optimization proceeds along conflicting gradient directions. Here, we explore inter-domain curriculum learning (IDCL), in which source domains are presented in a meaningful order, gradually exposing the model to more complex domains. Experiments show significant improvements on both the PACS and Office–Home benchmarks, and our method improves on the state of the art by 1.08%.
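The core idea above can be sketched in a few lines: rank the source domains by some difficulty measure and feed them to the trainer easiest-first, adding harder domains in later stages. The difficulty scores, domain names, and cumulative-stage schedule below are illustrative assumptions; the paper's actual ranking criterion and training schedule are not given on this page.

```python
# Hypothetical sketch of inter-domain curriculum learning (IDCL):
# present source domains in order of increasing difficulty instead of
# mixing them randomly. Difficulty values here are made up for illustration.

def build_curriculum(domain_difficulty):
    """Return source-domain names ordered from easiest to hardest."""
    return [name for name, _ in
            sorted(domain_difficulty.items(), key=lambda kv: kv[1])]

def curriculum_stages(ordered_domains):
    """Yield cumulative training stages: each stage adds the next-harder domain."""
    for i in range(1, len(ordered_domains) + 1):
        yield ordered_domains[:i]

# Example with PACS-style source domains (scores are illustrative only).
difficulty = {"photo": 0.1, "art_painting": 0.4, "cartoon": 0.6}
order = build_curriculum(difficulty)
stages = list(curriculum_stages(order))
# stage 1 trains on ["photo"]; each later stage adds the next-harder domain.
```

A real implementation would replace the hand-set scores with a measured proxy for domain difficulty and run the usual training loop within each stage.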

Original language: English
Pages (from-to): 225-229
Number of pages: 5
Journal: ICT Express
Issue number: 2
Publication status: Published - 2022 Jun

Bibliographical note

Funding Information:
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Challenge and Advanced Network of HRD program (2020-0-01826) and the Sustainable & Robust Autonomous Driving AI Education/Development Integrated Platform (2021-0-00994), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation).

Publisher Copyright:
© 2021 The Author(s)


Keywords

  • Deep neural networks
  • Domain generalization
  • Inter-domain curriculum learning

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

