TY - GEN
T1 - Multi-atlas based segmentation editing with interaction-guided constraints
AU - Park, Sang Hyun
AU - Gao, Yaozong
AU - Shen, Dinggang
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2015.
PY - 2015
Y1 - 2015
AB - We propose a novel multi-atlas based segmentation method for the editing scenario, in which an incomplete segmentation is given along with a set of training label images. Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate training labels and derive their voting weights. Specifically, we divide user interactions, provided on erroneous parts, into multiple local interaction combinations, and then locally search for training label patches that match each interaction combination as well as the previous segmentation. We then estimate the new segmentation by fusing the selected label patches, with weights defined by their distances to the interactions. Since the label patches are found from different combinations, various shape changes can be captured even with limited training labels and few user interactions. Because our method requires neither image information nor expensive learning steps, it can be conveniently applied to most editing problems. To demonstrate its performance, we apply our method to editing segmentations from three challenging data sets: prostate CT, brainstem CT, and hippocampus MR. The results show that our method outperforms existing editing methods on all three data sets.
UR - http://www.scopus.com/inward/record.url?scp=84951834251&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-24574-4_24
DO - 10.1007/978-3-319-24574-4_24
M3 - Conference contribution
AN - SCOPUS:84951834251
SN - 9783319245737
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 198
EP - 206
BT - Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Proceedings
A2 - Frangi, Alejandro F.
A2 - Navab, Nassir
A2 - Hornegger, Joachim
A2 - Wells, William M.
PB - Springer Verlag
T2 - 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015
Y2 - 5 October 2015 through 9 October 2015
ER -