Abstract
Most localization algorithms are either range-based or vision-based, and a single type of sensor often cannot ensure successful localization. This paper proposes a particle filter-based localization method that combines the range information from a low-cost IR scanner with SIFT-based visual information from a monocular camera to robustly estimate the robot pose. The rough pose estimate from the range sensor is compensated for by the visual information from the camera, and the slowness of visual object recognition is overcome by frequent updates of the range information. Although the two sensors operate at different bandwidths, they can be synchronized using the encoder information of the mobile robot. Therefore, all data from both sensors are used to estimate the robot pose without time delay, and the samples used for pose estimation converge faster than with either range-based or vision-based localization alone. This paper also suggests a method for evaluating the state of localization based on the normalized probability of a vision sensor model. Various experiments show that the proposed algorithm reliably estimates the robot pose in various indoor environments and can recover the robot pose after incorrect localization.
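As a rough illustration of the fusion scheme the abstract describes, the following is a minimal sketch of one particle-filter iteration: particles are propagated with encoder odometry, and per-particle range and vision likelihoods are multiplied into a single importance weight before resampling. All function names, noise parameters, and the synthetic likelihood inputs are assumptions made for illustration; they are not the paper's actual implementation.

```python
# Minimal sketch of a particle-filter update fusing range and vision
# likelihoods. Names and parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def motion_update(particles, d_trans, d_rot, trans_noise=0.02, rot_noise=0.01):
    """Propagate (x, y, theta) particles with encoder odometry plus noise."""
    n = len(particles)
    theta = particles[:, 2] + d_rot + rng.normal(0.0, rot_noise, n)
    dist = d_trans + rng.normal(0.0, trans_noise, n)
    particles[:, 0] += dist * np.cos(theta)
    particles[:, 1] += dist * np.sin(theta)
    particles[:, 2] = theta
    return particles

def fused_weights(range_lik, vision_lik):
    """Multiply per-particle sensor likelihoods and normalize to weights."""
    w = range_lik * vision_lik
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))

def resample(particles, weights):
    """Low-variance (systematic) resampling of the particle set."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx].copy()

# Illustrative usage with synthetic likelihoods standing in for the real
# p(z_range | x) and p(z_vision | x) sensor models.
particles = rng.uniform([-1, -1, -np.pi], [1, 1, np.pi], size=(500, 3))
particles = motion_update(particles, d_trans=0.10, d_rot=0.05)
range_lik = rng.random(500)
vision_lik = rng.random(500)
particles = resample(particles, fused_weights(range_lik, vision_lik))
```

Multiplying the two likelihoods treats the range and vision measurements as conditionally independent given the pose, which is the usual assumption in sensor-fusion particle filters; the normalized vision likelihood is also the quantity the abstract mentions for assessing the localization state.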
| Original language | English |
| --- | --- |
| Pages (from-to) | 97-104 |
| Number of pages | 8 |
| Journal | International Journal of Control, Automation and Systems |
| Volume | 7 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2009 Feb |
Keywords
- Mobile robot localization
- Sensor fusion
- Sensor model
- Vision-based navigation
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications