TY - GEN
T1 - Confidence-guided sequential label fusion for multi-atlas based segmentation
AU - Zhang, Daoqiang
AU - Wu, Guorong
AU - Jia, Hongjun
AU - Shen, Dinggang
PY - 2011
Y1 - 2011
AB - Label fusion is a key step in multi-atlas based segmentation: it combines the labels propagated from multiple atlases to make the final decision. However, most current label fusion methods treat each voxel equally and independently. In our view, different voxels play different roles, in that some voxels can be labeled with much higher confidence than others, e.g., because they are better aligned across all registered atlases. In light of this, we propose a sequential label fusion framework for multi-atlas based image segmentation that hierarchically uses high-confidence voxels to guide the labeling of more challenging voxels (those whose registration results across the deformed atlases are not good enough), thus affording more accurate label fusion. Specifically, we first measure the labeling confidence of each voxel based on the k-nearest-neighbor rule, and then perform label fusion sequentially according to the estimated confidence of each voxel. In particular, each fusion step uses not only the labels propagated from the atlases, but also the labels already estimated for neighboring voxels with higher labeling confidence. We demonstrate the advantage of our method by applying it to two popular label fusion algorithms, i.e., majority voting and local weighted voting. Experimental results show that our sequential label fusion method consistently improves the segmentation/labeling accuracy of both algorithms.
UR - http://www.scopus.com/inward/record.url?scp=82255180835&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-23626-6_79
DO - 10.1007/978-3-642-23626-6_79
M3 - Conference contribution
C2 - 22003754
AN - SCOPUS:82255180835
SN - 9783642236259
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 643
EP - 650
BT - Medical Image Computing and Computer-Assisted Intervention, MICCAI 2011 - 14th International Conference, Proceedings
T2 - 14th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2011
Y2 - 18 September 2011 through 22 September 2011
ER -