The performance of predicting biological markers from brain scans has improved rapidly in recent years, driven by the availability of open datasets and efficient deep learning algorithms. These algorithms raise two concerns, however: they are black-box models, and their high capacity makes them prone to over-fitting the training data. Explainability methods that visualize relevant structures aim to address the first issue, while data augmentation and pre-processing are used to curb overfitting and improve generalization performance. In this context, critical open issues are: (i) how robust explainability is across training setups, (ii) how higher model performance relates to explainability, and (iii) what effects pre-processing and augmentation have on performance and explainability. Here, we use a dataset of 1,452 scans to investigate how augmentation and pre-processing via brain registration affect explainability in the task of brain age estimation. Our multi-seed analysis shows that although both augmentation and registration significantly improve loss performance, the highlighted brain structures change substantially across training conditions. Our study highlights the need for careful consideration of training setups when interpreting deep learning outputs in brain analysis.
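The explainability method referenced in the keywords below is guided backpropagation, which modifies the backward pass of a ReLU network so that gradients flow only where both the forward activation and the incoming gradient are positive. The following is a minimal NumPy sketch of that rule on a toy one-layer network; the weights, input, and `saliency` computation are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

def guided_backprop_relu(upstream_grad, pre_activation):
    """Guided-backprop rule at a ReLU: pass gradient only where both the
    forward pre-activation and the upstream gradient are positive."""
    return upstream_grad * (pre_activation > 0) * (upstream_grad > 0)

# Toy one-layer example (hypothetical weights and input, for illustration only)
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weight matrix of a single linear layer
x = rng.standard_normal(3)        # a toy input vector ("voxels")
z = W @ x                         # pre-activations
a = np.maximum(z, 0)              # ReLU activations

# Take the scalar output to be a.sum(), so its gradient w.r.t. a is all ones.
g_a = np.ones_like(a)
g_z = guided_backprop_relu(g_a, z)   # apply the guided rule at the ReLU
saliency = np.abs(W.T @ g_z)         # per-input attribution magnitudes
```

In a full network this rule is applied at every ReLU during the backward pass, and the resulting input-space gradient is visualized as a saliency map over the brain volume.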
|Title of host publication||Interpretability of Machine Intelligence in Medical Image Computing - 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Proceedings|
|Editors||Mauricio Reyes, Pedro Henriques Abreu, Jaime Cardoso|
|Publisher||Springer Science and Business Media Deutschland GmbH|
|Number of pages||10|
|Publication status||Published - 2022|
|Event||5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2022, held in conjunction with the 25th International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2022 - Singapore, Singapore|
Duration: 2022 Sept 22 → 2022 Sept 22
|Name||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Conference||5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2022, held in conjunction with the 25th International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2022|
|Period||22/9/22 → 22/9/22|
Bibliographical note
Funding Information: This study was supported by the National Research Foundation of Korea under project BK21 FOUR and grants NRF-2017M3C7A1041824, NRF-2019R1A2C2007612, as well as by Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning; No. 2019-0-00079, Department of Artificial Intelligence, Korea University; No. 2021-0-02068, Artificial Intelligence Innovation Hub).
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
Keywords
- Brain age estimation
- Deep learning
- Guided backpropagation
ASJC Scopus subject areas
- Theoretical Computer Science
- Computer Science (all)