In this paper, we propose a novel vision-based global localization method built on a hybrid map representation. We employ PCA-SIFT features as visual landmarks and represent the environment with a hybrid map consisting of a global topological map and local metric maps. To localize the robot, we extract visual features from the currently captured view and match them against a feature database previously constructed under the hybrid map representation. After filtering out noisy matches, we estimate the robot's pose from the qualified matches using the RANSAC approach. We implemented the proposed method on a real mobile robot and tested it in both a home-like room and an office-like corridor. The experimental results show that our vision-based global localization system achieves acceptable processing time and accuracy.
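The RANSAC-based pose estimation step can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes 2D landmark correspondences (map coordinates vs. observed coordinates) have already been produced by feature matching, and it fits a planar rigid transform (heading `theta` plus translation `tx, ty`) from two-point samples, keeping the hypothesis with the most inliers so that mismatched features are rejected.

```python
import math
import random

def rigid_from_two(p, q):
    """Rigid 2D transform (theta, tx, ty) mapping two map points p onto two observed points q."""
    (x1, y1), (x2, y2) = p
    (u1, v1), (u2, v2) = q
    # Rotation is the change in direction of the segment between the two points.
    theta = math.atan2(v2 - v1, u2 - u1) - math.atan2(y2 - y1, x2 - x1)
    c, s = math.cos(theta), math.sin(theta)
    # Translation aligns the first point after rotation.
    tx = u1 - (c * x1 - s * y1)
    ty = v1 - (s * x1 + c * y1)
    return theta, tx, ty

def ransac_pose(map_pts, obs_pts, iters=200, tol=0.1, seed=0):
    """Estimate (theta, tx, ty) from matched landmark pairs while rejecting outlier matches."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    n = len(map_pts)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)          # minimal sample: two correspondences
        theta, tx, ty = rigid_from_two(
            (map_pts[i], map_pts[j]), (obs_pts[i], obs_pts[j]))
        c, s = math.cos(theta), math.sin(theta)
        # A match is an inlier if the transformed map point lands near the observation.
        inliers = [k for k in range(n)
                   if math.hypot(c * map_pts[k][0] - s * map_pts[k][1] + tx - obs_pts[k][0],
                                 s * map_pts[k][0] + c * map_pts[k][1] + ty - obs_pts[k][1]) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (theta, tx, ty), inliers
    return best, best_inliers
```

In practice the correspondences would come from matching PCA-SIFT descriptors against the hybrid-map feature database, and the surviving inliers could be used for a final least-squares refinement of the pose.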