Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

Shaoyu Wang, Minjeong Kim, Guorong Wu, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Chapter

17 Citations (Scopus)


The emergence of modern imaging techniques, such as MRI and PET, offers the opportunity to study the human brain in ways that were previously impossible. As an effective measurement, imaging-based analysis has been increasingly employed in research and clinical studies, such as those of brain development/aging and the effects of pharmacological interventions. This brings forth the need for sophisticated, highly automated image analysis methods to identify and quantify anatomical changes, which are often confounded by complex morphological patterns and inter-individual variations in structure and function. Image registration, as an important image measurement tool, has attracted great scientific interest, since it is the key step in dealing with inter-subject variability. However, accurate anatomical correspondence detection remains the key problem in image registration. Since intensity alone is not sufficient to derive meaningful correspondences, due to its lack of discriminative power, many feature-based image registration methods have been proposed that use the image features at each point as a morphological signature to guide registration. However, current feature representations have several limitations. First, most image features are handcrafted, which requires intensive, dedicated effort. Second, the designed image features are often problem-specific and hardly reusable, i.e., not guaranteed to work for all types of images. For example, image features designed for 1.5-T T1-weighted brain MR images are not applicable to 7.0-T MR images, not to mention other modalities or other organs. Third, although current state-of-the-art methods use supervised learning to find the most relevant and essential features, they require a significant amount of manually labeled training data, and the learned features may be superficial and may misrepresent the complexity of anatomical structures.
More critically, the learning procedure is often confined to a particular template domain, with a certain number of pre-designed features. Therefore, once the template or the image features change, the entire training process has to start over. Our goal is to seek a general feature representation framework that (i) is able to sufficiently capture the intrinsic characteristics of anatomical structures for accurate correspondence detection, and (ii) can be flexibly applied to the registration of different kinds of neuroimages. Since medical images are high-dimensional and large sets of manually delineated correspondences are generally unavailable, it is important to design an automatic method that learns latent feature representations in a hierarchical and unsupervised manner, i.e., that infers both high-level and low-level features directly from the observed data via a deep learning network. Specifically, we aim to explore deep learning to improve current state-of-the-art registration methods by providing more powerful feature representations. The flexibility of the deep learning architecture also allows us to extend our network to various challenging image registration tasks, where current registration methods are far from being well developed enough to keep up with the increasing demands of modern imaging-based studies.
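The abstract does not specify the network architecture, but the idea of learning feature representations from unlabeled image patches can be illustrated with a minimal single-layer autoencoder: patches around each voxel are encoded into hidden activations, which then serve as learned morphological signatures for correspondence matching. This is a hedged sketch in NumPy, not the chapter's actual method; in practice such layers would be stacked, with each layer's features feeding the next, to obtain the hierarchical representation described above. The function name `train_autoencoder` and all parameter values are illustrative assumptions.

```python
import numpy as np

def train_autoencoder(patches, n_hidden=16, lr=0.5, epochs=200, seed=0):
    """Train a single-layer autoencoder on unlabeled patches (illustrative).

    Returns an encoder function whose hidden activations act as learned
    feature signatures, plus the per-epoch reconstruction losses.
    """
    rng = np.random.default_rng(seed)
    n, d = patches.shape
    W1 = rng.normal(0.0, 0.1, (d, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, d))   # decoder weights
    b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-(patches @ W1 + b1)))  # sigmoid encoding
        recon = h @ W2 + b2                              # linear decoding
        err = recon - patches
        losses.append(float(np.mean(err ** 2)))
        # Backpropagate the mean-squared reconstruction error.
        g_recon = 2.0 * err / (n * d)
        gW2 = h.T @ g_recon
        gb2 = g_recon.sum(axis=0)
        g_h = (g_recon @ W2.T) * h * (1.0 - h)
        gW1 = patches.T @ g_h
        gb1 = g_h.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    encode = lambda x: 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return encode, losses

# Synthetic 7x7 "patches" standing in for patches sampled around voxels.
rng = np.random.default_rng(1)
patches = rng.random((200, 49))
encode, losses = train_autoencoder(patches)
features = encode(patches)  # learned signatures, one row per patch
```

No manual labels or correspondences are used; the reconstruction objective alone drives the features, which is what makes the approach reusable across templates and modalities.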

Original language: English
Title of host publication: Deep Learning for Medical Image Analysis
Publisher: Elsevier Inc.
Number of pages: 25
ISBN (Electronic): 9780128104095
ISBN (Print): 9780128104088
Publication status: Published - 2017 Jan 30

Bibliographical note

Publisher Copyright:
© 2017 Elsevier Inc. All rights reserved.


Keywords

  • Deep learning
  • Deformable image registration
  • Hierarchical feature representation

ASJC Scopus subject areas

  • General Engineering

