TY - GEN
T1 - Joint learning of appearance and transformation for predicting brain MR image registration
AU - Wang, Qian
AU - Kim, Minjeong
AU - Wu, Guorong
AU - Shen, Dinggang
PY - 2013
Y1 - 2013
N2 - We propose a new approach to registering the subject image with the template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they may have a common correspondence in the template. Accordingly, we learn the sparse representation of a given subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, retaining multiple predictions for each key point (instead of allowing only a single correspondence). Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed the above prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field via an existing registration method. We apply our method to registering brain MR images and conclude that the proposed method improves registration performance in terms of both time cost and accuracy.
AB - We propose a new approach to registering the subject image with the template by leveraging a set of training images that are pre-aligned to the template. We argue that, if voxels in the subject and the training images share similar local appearances and transformations, they may have a common correspondence in the template. Accordingly, we learn the sparse representation of a given subject voxel to reveal several similar candidate voxels in the training images. Each selected training candidate can bridge the correspondence from the subject voxel to the template space, thus predicting the transformation associated with the subject voxel at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, retaining multiple predictions for each key point (instead of allowing only a single correspondence). Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. For robustness and computational speed, we embed the above prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we efficiently refine the estimated transformation field via an existing registration method. We apply our method to registering brain MR images and conclude that the proposed method improves registration performance in terms of both time cost and accuracy.
UR - http://www.scopus.com/inward/record.url?scp=84901270559&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-38868-2_42
DO - 10.1007/978-3-642-38868-2_42
M3 - Conference contribution
C2 - 24683994
AN - SCOPUS:84901270559
SN - 9783642388675
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 499
EP - 510
BT - Information Processing in Medical Imaging - 23rd International Conference, IPMI 2013, Proceedings
T2 - 23rd International Conference on Information Processing in Medical Imaging, IPMI 2013
Y2 - 28 June 2013 through 3 July 2013
ER -