Autoencoder and restricted Boltzmann machine for transfer learning in functional magnetic resonance imaging task classification

Jundong Hwang, Niv Lustig, Minyoung Jung, Jong Hwan Lee

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Deep neural networks (DNNs) have been widely adopted as classifiers for functional magnetic resonance imaging (fMRI) data, advancing beyond traditional machine learning models. Consequently, transfer learning of a pre-trained DNN becomes crucial to enhancing DNN classification performance, specifically by alleviating the overfitting that occurs when a substantial number of DNN parameters are fitted to a relatively small number of fMRI samples. In this study, we first systematically compared the two most widely used unsupervised pretraining models for resting-state fMRI (rfMRI) volume data to pre-train the DNNs, namely the autoencoder (AE) and the restricted Boltzmann machine (RBM). The group in-brain mask used when training the AE and RBM displayed a sizable overlap ratio with Yeo's seven functional brain networks (FNs). The parcellated FNs obtained from the RBM were fine-grained compared with those from the AE. The pre-trained AE and RBM weights served as the weight parameters of the first of the two hidden DNN layers, and the DNN fulfilled the task classifier role for task fMRI (tfMRI) data in the Human Connectome Project (HCP). We tested two transfer learning schemes: (1) fixing and (2) fine-tuning the DNN's pre-trained AE or RBM weights. The DNN with transfer learning was compared to a baseline DNN trained using random initial weights. Overall, classification performance from transfer learning proved superior when the pre-trained RBM weights were fixed and when the pre-trained AE weights were fine-tuned (average error rates: 14.8% for the fixed RBM, 15.1% for the fine-tuned AE, and 15.5% for the baseline model) compared to the alternative transfer learning schemes. Moreover, the optimal transfer learning scheme between the fixed RBM and fine-tuned AE varied across the seven task conditions in the HCP. Nonetheless, the computational load was reduced substantially for fixed-weight-based transfer learning compared to fine-tuning-based transfer learning (e.g., the number of trainable weight parameters for the fixed-weight-based DNN model was reduced to 1.9% of that of a baseline/fine-tuned DNN model). Our findings suggest that weight initialization of the DNN's first layer using RBM-based pre-trained weights provides the most promising approach when the whole-brain fMRI volume supports the associated task classification. We believe that our proposed scheme could be applied to a variety of task conditions to improve their classification performance while using computational resources efficiently, given the AE/RBM-based pre-trained weights compared to random initial weights for DNN training.
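The parameter savings of the fixed-weight scheme follow directly from excluding the pre-trained first layer from training: for whole-brain fMRI input, the first layer dominates the parameter count. As a minimal sketch, with hypothetical layer sizes (the input dimension, hidden widths, and class count below are illustrative, not taken from the paper), the trainable-weight count under each scheme can be compared as:

```python
# Hypothetical layer sizes (illustrative only, not from the paper): a
# whole-brain fMRI volume flattened to n_in voxels, two hidden layers,
# and n_cls task classes.
n_in, n_h1, n_h2, n_cls = 50_000, 100, 100, 7

def trainable_weights(first_layer_fixed: bool) -> int:
    """Count trainable weight parameters (biases omitted for brevity).

    With fixed-weight transfer learning, the first hidden layer holds the
    AE- or RBM-pre-trained weights and is excluded from training; with
    fine-tuning (or random baseline initialization), every layer is trained.
    """
    first = 0 if first_layer_fixed else n_in * n_h1
    return first + n_h1 * n_h2 + n_h2 * n_cls

baseline = trainable_weights(first_layer_fixed=False)  # baseline / fine-tuned
fixed = trainable_weights(first_layer_fixed=True)      # pre-trained layer frozen

print(f"baseline/fine-tuned: {baseline:,} trainable weights")
print(f"fixed first layer:   {fixed:,} trainable weights "
      f"({100 * fixed / baseline:.2f}% of baseline)")
```

The exact fraction (the paper reports 1.9%) depends on the actual voxel count and hidden-layer widths; the sketch only shows why freezing the first layer removes most of the trainable parameters.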

Original language: English
Article number: e18086
Journal: Heliyon
Volume: 9
Issue number: 7
DOIs
Publication status: Published - 2023 Jul

Bibliographical note

Funding Information:
This work was supported by a National Research Foundation (NRF) grant, the Ministry of Science and ICT (MSIT) of Korea (NRF-2017R1E1A1A01077288, NRF-2021M3E5D2A01022515), in part by a National Research Council of Science & Technology (NST) grant provided by the Korean government (MSIT) [No. CAP18015-101], and by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [23ZS1100, Core Technology Research for Self-Improving Integrated Artificial Intelligence System].
Publisher Copyright:
© 2023 The Authors

Keywords

  • Autoencoder
  • Deep neural network
  • Functional magnetic resonance imaging
  • Human connectome project
  • Restricted Boltzmann machine
  • Transfer learning

ASJC Scopus subject areas

  • General
