TY - JOUR
T1 - Label-aligned multi-task feature learning for multimodal classification of Alzheimer’s disease and mild cognitive impairment
AU - Zu, Chen
AU - Jie, Biao
AU - Liu, Mingxia
AU - Chen, Songcan
AU - Shen, Dinggang
AU - Zhang, Daoqiang
AU - Alzheimer’s Disease Neuroimaging Initiative
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China (Nos. 61422204, 61473149, 61170151), the Jiangsu Natural Science Foundation for Distinguished Young Scholars (No. BK20130034), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20123218110009), the NUAA Fundamental Research Funds (No. NE2013105), and NIH grants EB006733, EB008374, EB009634, MH100217, AG041721, and AG042599.
Publisher Copyright:
© 2015, Springer Science+Business Media New York.
PY - 2016/12/1
Y1 - 2016/12/1
AB - Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based methods for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI by fully exploring the relationships across both modalities and subjects. Specifically, the proposed method consists of two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from each modality is treated as a separate learning task, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to exploit the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class label should lie closer together in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that the proposed method achieves better classification performance than several state-of-the-art methods for multimodal classification of AD/MCI.
KW - Alzheimer’s disease
KW - Feature selection
KW - Label alignment
KW - Mild cognitive impairment
KW - Multi-task learning
KW - Multimodal classification
UR - http://www.scopus.com/inward/record.url?scp=84947081116&partnerID=8YFLogxK
U2 - 10.1007/s11682-015-9480-7
DO - 10.1007/s11682-015-9480-7
M3 - Article
C2 - 26572145
AN - SCOPUS:84947081116
SN - 1931-7557
VL - 10
SP - 1148
EP - 1159
JO - Brain Imaging and Behavior
JF - Brain Imaging and Behavior
IS - 4
ER -