Most traditional classification methods for Alzheimer's disease (AD) aim to achieve high accuracy, or equivalently a low classification error rate, which implicitly assumes that all misclassifications incur the same loss. However, in practical AD diagnosis, the losses of misclassifying healthy subjects and AD patients are usually very different. For example, misclassifying a healthy subject as AD may be troublesome, but misclassifying an AD patient as healthy can have far more serious consequences. In this paper, we propose a multi-stage cost-sensitive approach for AD classification using multimodal imaging data and CSF biomarkers. Our approach contains three key components: (1) cost-sensitive feature selection, which selects more AD-related brain regions by assigning different costs to different misclassifications in the feature selection stage; (2) multimodal data fusion, which effectively fuses MRI, PET, and CSF biomarker data through a multiple-kernel combination; and (3) cost-sensitive classifier construction, which further reduces the overall misclassification loss through a threshold-moving strategy. Experimental results on the ADNI dataset show that the proposed approach significantly reduces the misclassification cost and simultaneously improves sensitivity, while achieving the same or even higher classification accuracy than conventional methods.
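The threshold-moving idea behind component (3) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the cost values are hypothetical, and the rule shown is the standard Bayes-optimal threshold shift for binary classification under asymmetric misclassification costs.

```python
import numpy as np

# Illustrative misclassification costs (hypothetical values, not from the paper):
# missing an AD patient (false negative) is costlier than a false alarm.
COST_FN = 5.0  # cost of labeling an AD patient as healthy
COST_FP = 1.0  # cost of labeling a healthy subject as AD

def moved_threshold(cost_fp, cost_fn):
    """Bayes-optimal decision threshold on P(AD | x) under asymmetric costs.

    Predict AD whenever p * cost_fn > (1 - p) * cost_fp,
    i.e. whenever p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

def classify(posteriors, cost_fp=COST_FP, cost_fn=COST_FN):
    """Apply threshold-moving to posterior probabilities: 1 = AD, 0 = healthy."""
    t = moved_threshold(cost_fp, cost_fn)
    return (np.asarray(posteriors) > t).astype(int)

# With costs 1 and 5 the threshold drops from 0.5 to 1/6 ≈ 0.167,
# so borderline subjects are pushed toward the AD label, raising sensitivity.
probs = [0.10, 0.20, 0.45, 0.80]
print(moved_threshold(COST_FP, COST_FN))  # ≈ 0.1667
print(classify(probs))                    # [0 1 1 1]
```

With equal costs the rule reduces to the usual 0.5 threshold; making false negatives costlier lowers the threshold, which is why threshold-moving trades a few false alarms for fewer missed AD patients.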