Patch-based label fusion methods have shown great potential in multi-atlas segmentation. For such methods, it is crucial to determine an appropriate graph and corresponding weights that link patches in the input image with those in the atlas images. Current approaches perform these two tasks in two independent steps: a graph is first constructed from a fixed image neighborhood, and weights are then computed with a heat kernel for all patches in that neighborhood. In this paper, we first show that many existing label fusion methods can be unified into a graph-based framework, and then propose a novel method that simultaneously derives both the graph adjacency structure and the graph weights from a sparse representation, in order to perform multi-atlas segmentation. Our motivation is that each patch in the input image can be reconstructed as a sparse linear superposition of patches in the atlas images, so the reconstruction coefficients can be used to deduce both the graph structure and the weights simultaneously. Experimental results on segmenting brain anatomical structures from magnetic resonance (MR) images show that the proposed method achieves significant improvements over previous patch-based methods, as well as over other conventional label fusion methods.
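To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of sparse-representation-based label fusion: a target patch is reconstructed as a nonnegative sparse combination of atlas patches via a projected ISTA solver for the L1-penalized least-squares problem, and the resulting coefficients serve directly as graph weights for label voting. All function names, the solver choice, and parameter values here are assumptions for illustration.

```python
import numpy as np

def sparse_fusion_weights(y, D, lam=0.01, n_iter=200):
    """Estimate sparse reconstruction coefficients w >= 0 for
    min_w (1/2)||y - D w||^2 + lam * ||w||_1 via projected ISTA.
    Illustrative sketch only; the paper's exact solver may differ.

    y : (d,) target image patch (vectorized)
    D : (d, n) dictionary whose columns are vectorized atlas patches
    """
    n = D.shape[1]
    w = np.zeros(n)
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = D.T @ (D @ w - y)        # gradient of the quadratic term
        w = w - step * grad
        # soft-thresholding (L1 proximal step) plus nonnegativity projection;
        # zeros in w define the graph adjacency, nonzeros its weights
        w = np.maximum(w - step * lam, 0.0)
    return w

def fuse_label(w, atlas_labels):
    """Weighted voting: each atlas patch votes for its center label
    with its reconstruction coefficient as the vote weight."""
    labels = np.unique(atlas_labels)
    votes = np.array([w[atlas_labels == l].sum() for l in labels])
    return labels[np.argmax(votes)]

# Toy demonstration with a synthetic dictionary of 5 atlas patches.
rng = np.random.default_rng(0)
D = rng.random((16, 5))
D /= np.linalg.norm(D, axis=0)          # unit-norm columns
atlas_labels = np.array([1, 0, 0, 2, 2])  # hypothetical center labels
y = D[:, 0].copy()                      # target equals the first atlas patch
w = sparse_fusion_weights(y, D)
fused = fuse_label(w, atlas_labels)
```

Because the target patch coincides with the first atlas patch, the sparse solver concentrates its weight on that column, and the fused label matches that patch's label; patches receiving zero coefficients are effectively pruned from the graph, which is how a single solve yields both structure and weights.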