Recently, patch-based label fusion methods have achieved considerable success in medical image analysis. After registering atlas images to the target image, the label at each target image point can be determined by checking the patchwise similarities between the underlying target image patch and all atlas image patches. Clearly, the definition of patchwise similarity is critical in label fusion. However, current methods often simply use the entire image patch with a fixed patch size throughout the label fusion procedure, which can be insufficient to distinguish the complex shape/appearance patterns of anatomical structures in medical images. In this paper, we address the above limitations in three ways. First, we assign each image patch a multiscale feature representation so that both local and semi-local image information are encoded, increasing the robustness of measuring patchwise similarity in label fusion. Second, since multiple variable neighboring structures can be present in one image patch, computing patchwise similarity on the entire image patch is not specific to the particular structure of interest under labeling and can easily be misled by the surrounding variable structures in the same patch. Thus, we partition each atlas patch into a set of new label-specific atlas patches according to the existing label information in the atlas images. These label-specific atlas patches are more specific and flexible for label fusion than the entire image patch, since the complex image patch has now been semantically divided into several distinct patterns. Finally, in order to correct possible mislabeling, we hierarchically improve the label fusion result in a coarse-to-fine manner by iteratively repeating the label fusion procedure with a gradually reduced patch size.
More accurate label fusion results have been achieved by our hierarchical label fusion method with multiscale feature representations upon label-specific atlas patches.
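The weighted-voting core of patch-based label fusion that the abstract builds on can be sketched as follows. This is a minimal, generic nonlocal-means-style illustration, not the paper's exact method: the function name, the Gaussian similarity kernel, and the bandwidth parameter `sigma` are all assumptions introduced for illustration.

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Illustrative weighted-voting label fusion (not the paper's method).

    Each atlas patch votes for the label of its centre voxel, weighted by
    its patchwise similarity to the target patch.
    """
    weights = []
    for patch in atlas_patches:
        d2 = np.sum((target_patch - patch) ** 2)   # patchwise SSD distance
        weights.append(np.exp(-d2 / (sigma ** 2)))  # Gaussian similarity weight
    # Accumulate the similarity weights per candidate label.
    votes = {}
    for w, lab in zip(weights, atlas_labels):
        votes[lab] = votes.get(lab, 0.0) + w
    # The label with the largest accumulated weight wins.
    return max(votes, key=votes.get)
```

The limitations discussed above enter exactly here: the SSD is computed over the whole fixed-size patch, so restricting the sum to label-specific sub-patches, using multiscale features in place of raw intensities, and shrinking the patch size across iterations are the three refinements the paper proposes.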