Abstract
Medical imaging plays a critical role in various clinical applications. However, due to considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Medical image synthesis can therefore be of great benefit by estimating a desired imaging modality without performing an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we train the FCN with an adversarial learning strategy. Moreover, the FCN incorporates an image-gradient-difference-based loss function to avoid generating blurry target images. Long-term residual units are also explored to ease the training of the network. We further apply the Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT images from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks.
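The abstract does not give the exact loss formulation. As an illustration only, below is a minimal PyTorch-style sketch of an image-gradient-difference loss term of the kind described; the function name, tensor shapes, and weighting scheme are assumptions for illustration, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def gradient_difference_loss(pred, target):
    """Penalize differences between the spatial gradients of the synthesized
    and real target images, discouraging blurry outputs.
    pred, target: 4-D tensors of shape (N, C, H, W)."""
    # Finite-difference gradients along height and width.
    pred_dy = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    pred_dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    target_dy = target[:, :, 1:, :] - target[:, :, :-1, :]
    target_dx = target[:, :, :, 1:] - target[:, :, :, :-1]
    # Mean squared difference between absolute gradients in each direction.
    loss_y = F.mse_loss(torch.abs(pred_dy), torch.abs(target_dy))
    loss_x = F.mse_loss(torch.abs(pred_dx), torch.abs(target_dx))
    return loss_y + loss_x

# Hypothetical combined generator objective (weights are placeholders):
# total_loss = reconstruction_loss \
#     + lambda_gdl * gradient_difference_loss(pred, target) \
#     + lambda_adv * adversarial_loss
```

In such a setup, the gradient term is typically weighted against the pixel-wise reconstruction loss and the adversarial loss when training the generator.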
| Original language | English |
|---|---|
| Article number | 8310638 |
| Pages (from-to) | 2720-2730 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Biomedical Engineering |
| Volume | 65 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 2018 Dec |
Keywords
- Adversarial learning
- auto-context model
- deep learning
- image synthesis
- residual learning
ASJC Scopus subject areas
- Biomedical Engineering