Do Pre-processing and Augmentation Help Explainability? A Multi-seed Analysis for Brain Age Estimation

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    The performance of predicting biological markers from brain scans has increased rapidly in recent years, owing to the availability of open datasets and efficient deep learning algorithms. These algorithms raise two concerns, however: they are black-box models, and their high capacity makes them prone to overfitting the training data. Explainability methods that visualize relevant structures aim to address the first issue, whereas data augmentation and pre-processing are used to avoid overfitting and improve generalization performance. In this context, critical open questions are: (i) how robust explainability is across training setups, (ii) how higher model performance relates to explainability, and (iii) what effects pre-processing and augmentation have on performance and explainability. Here, we use a dataset of 1,452 scans to investigate the effects of augmentation and of pre-processing via brain registration on explainability for the task of brain age estimation. Our multi-seed analysis shows that although both augmentation and registration significantly improve performance in terms of loss, the highlighted brain structures change substantially across training conditions. Our study highlights the need for careful consideration of training setups when interpreting deep learning outputs in brain analysis.
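    The multi-seed comparison of highlighted structures described in the abstract can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' code: it binarizes each seed's saliency map (e.g., from guided backpropagation) at a percentile threshold and reports the mean pairwise Dice overlap, so low overlap indicates that highlighted regions change across seeds. The function names, threshold, and array shapes are assumptions for illustration.

    ```python
    import numpy as np

    def dice_overlap(a, b):
        """Dice coefficient between two binary masks."""
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom > 0 else 1.0

    def saliency_agreement(maps, percentile=90):
        """Mean pairwise Dice overlap of top-percentile saliency voxels.

        `maps` is a list of same-shaped arrays, one saliency map per seed.
        """
        masks = [m >= np.percentile(m, percentile) for m in maps]
        scores = []
        for i in range(len(masks)):
            for j in range(i + 1, len(masks)):
                scores.append(dice_overlap(masks[i], masks[j]))
        return float(np.mean(scores))

    # Example: agreement of three random "saliency maps" across seeds
    rng = np.random.default_rng(0)
    maps = [rng.random((8, 8, 8)) for _ in range(3)]
    agreement = saliency_agreement(maps)
    ```

    Identical maps would yield an agreement of 1.0, while independently trained seeds with unstable explanations would drive the score toward the chance overlap of the thresholded masks.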

    Original language: English
    Title of host publication: Interpretability of Machine Intelligence in Medical Image Computing - 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Proceedings
    Editors: Mauricio Reyes, Pedro Henriques Abreu, Jaime Cardoso
    Publisher: Springer Science and Business Media Deutschland GmbH
    Pages: 12-21
    Number of pages: 10
    ISBN (Print): 9783031179754
    DOIs
    Publication status: Published - 2022
    Event: 5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022 - Singapore, Singapore
    Duration: 22 Sept 2022 → 22 Sept 2022

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 13611 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022
    Country/Territory: Singapore
    City: Singapore
    Period: 22/9/22 → 22/9/22

    Bibliographical note

    Publisher Copyright:
    © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

    Keywords

    • Brain age estimation
    • Deep learning
    • Explainability
    • Guided backpropagation
    • Interpretability

    ASJC Scopus subject areas

    • Theoretical Computer Science
    • General Computer Science
