The goal of multi-atlas segmentation is to estimate the anatomical label at each target image point by combining the labels from a set of registered atlas images via label fusion. Label fusion is typically formulated either as a reconstruction problem or as a classification problem. Reconstruction-based methods compute the target labels as a weighted average of the atlas labels, with weights derived from representing each target image patch as a linear combination of atlas image patches. The issue is that weights that are optimal in the image domain do not necessarily correspond to optimal weights in the label domain. Classification-based methods avoid this issue by directly learning the relationship between the image and label domains. However, the learned relationships describe the common characteristics of all training atlas patches and may not be representative of a particular target image patch, which can undermine the labeling results. To overcome the limitations of both types of methods, we formulate the patch-based label fusion problem as a matrix completion problem. By doing so, we can jointly exploit (1) the relationships between atlas and target image patches (taking advantage of reconstruction-based methods) and (2) the relationships between the image and label domains (taking advantage of classification-based methods). Our generalized paradigm improves label fusion accuracy in segmenting challenging structures, e.g., the hippocampus, compared with state-of-the-art methods.
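To make the reconstruction-based baseline described above concrete, here is a minimal sketch in NumPy. It represents a target patch as a ridge-regularized linear combination of atlas patches (the specific solver, non-negativity clipping, and the `lam` penalty are illustrative assumptions, not the paper's exact formulation), then transfers the same weights to the label domain via a weighted vote:

```python
import numpy as np

def reconstruction_label_fusion(target_patch, atlas_patches, atlas_labels, lam=0.1):
    """Sketch of reconstruction-based label fusion.

    target_patch : (d,) intensity vector of the target image patch.
    atlas_patches: (d, n) matrix whose columns are registered atlas patches.
    atlas_labels : (n,) label of the center voxel of each atlas patch.
    lam          : ridge penalty (hypothetical choice for illustration).
    Returns the fused label for the target voxel.
    """
    d, n = atlas_patches.shape
    A = atlas_patches
    # Weights w minimize ||A w - t||^2 + lam ||w||^2 in the IMAGE domain.
    w = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ target_patch)
    w = np.clip(w, 0.0, None)      # keep only non-negative contributions
    w = w / (w.sum() + 1e-12)      # normalize to a weighted average
    # The same weights are then applied in the LABEL domain:
    # a weighted vote over the atlas labels.
    labels = np.unique(atlas_labels)
    votes = np.array([w[atlas_labels == c].sum() for c in labels])
    return labels[np.argmax(votes)]
```

Because the weights are fit purely to image intensities, nothing guarantees they are optimal for the labels, which is exactly the mismatch the matrix completion formulation is designed to address.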
- Number of pages: 8
- Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- Publication status: Published - 2014 Jan 1
ASJC Scopus subject areas
- Computer Science(all)
- Theoretical Computer Science