Abstract
Ultra-high-field 7T MRI scanners, while producing images with exceptional anatomical detail, are cost-prohibitive and hence largely inaccessible. In this paper, we introduce a novel deep learning network that fuses complementary information from the spatial and wavelet domains to synthesize 7T T1-weighted images from their 3T counterparts. Our network leverages the wavelet transform to facilitate effective multi-scale reconstruction, accounting for both low-frequency tissue contrast and high-frequency anatomical detail. It employs a novel wavelet-based affine transformation (WAT) layer, which modulates feature maps from the spatial domain with information from the wavelet domain. Extensive experimental results demonstrate that the proposed method synthesizes high-quality 7T images with better tissue contrast and finer anatomical detail, outperforming state-of-the-art methods.
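The abstract describes the WAT layer only at a high level and gives no implementation details. Below is a minimal PyTorch sketch of how such a layer might work, assuming a single-level Haar decomposition and a FiLM-style per-pixel affine modulation; the class name `WATLayer`, the Haar basis choice, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt2d(x):
    """Single-level 2D Haar DWT via 2x2 block sums/differences.

    x: (B, C, H, W) with even H and W.
    Returns (LL, LH, HL, HH), each of shape (B, C, H/2, W/2).
    """
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh


class WATLayer(nn.Module):
    """Hypothetical wavelet-based affine transformation (WAT) layer.

    Predicts a per-pixel scale (gamma) and shift (beta) from the wavelet
    subbands of the input image and applies them to spatial-domain
    feature maps: out = gamma * feat + beta.
    """

    def __init__(self, feat_channels, in_channels=1):
        super().__init__()
        # The four subbands are stacked along the channel axis.
        self.to_gamma = nn.Conv2d(4 * in_channels, feat_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(4 * in_channels, feat_channels, 3, padding=1)

    def forward(self, feat, image):
        # Decompose the input image; subbands are at half its resolution.
        ll, lh, hl, hh = haar_dwt2d(image)
        w = torch.cat([ll, lh, hl, hh], dim=1)
        # Resize wavelet features to match the spatial feature map.
        w = F.interpolate(w, size=feat.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.to_gamma(w) * feat + self.to_beta(w)


# Usage: modulate 64-channel features of a 3T slice with its wavelet subbands.
feat = torch.randn(1, 64, 128, 128)
slice_3t = torch.randn(1, 1, 256, 256)
out = WATLayer(64)(feat, slice_3t)
print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In the multi-scale setting the abstract describes, such modulation would presumably be applied at several decomposition levels; the single-level version above is kept deliberately small.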
| Original language | English |
| --- | --- |
| Article number | 101663 |
| Journal | Medical Image Analysis |
| Volume | 62 |
| DOIs | |
| Publication status | Published - May 2020 |
Bibliographical note
Funding Information: This work was supported in part by NIH grant EB006733.
Publisher Copyright: © 2020
Keywords
- Image synthesis
- Magnetic resonance imaging (MRI)
- Spatial and wavelet domains
ASJC Scopus subject areas
- Radiological and Ultrasound Technology
- Radiology, Nuclear Medicine and Imaging
- Computer Vision and Pattern Recognition
- Health Informatics
- Computer Graphics and Computer-Aided Design