Abstract
Improving the practical capability of SLAM requires effective sensor fusion to cope with the large uncertainties arising from the sensors and the environment. Fusing ultrasonic and vision sensors offers both economic efficiency and complementary cooperation. In particular, it can resolve the false data association and divergence problems of an ultrasonic-sensor-only algorithm, and it can overcome both the low SLAM update frequency caused by the computational burden and the sensitivity to illumination changes of a vision-sensor-only algorithm. In this paper, we propose VR-SLAM (Vision and Range sensor SLAM), an algorithm that combines ultrasonic sensors and a stereo camera effectively. It consists of two schemes: (1) extracting robust point and line features from sonar data, and (2) recognizing planar visual objects from a pre-constructed object database using a multi-scale Harris corner detector and SIFT descriptors. We show that fusing these schemes within an EKF-SLAM framework achieves correct data association via object recognition and high-frequency updates via the sonar features. The performance of the proposed algorithm was verified by experiments in various real indoor environments.
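The EKF-SLAM fusion described in the abstract can be illustrated with a minimal sketch of one correction step for a single landmark observed as range/bearing (e.g., a sonar point feature or a recognized visual object). The state layout, noise values, and function name below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ekf_update(mu, Sigma, z, R):
    """One EKF-SLAM correction step for a single point landmark.

    mu    : state vector [x, y, theta, lx, ly] (robot pose + landmark position)
    Sigma : 5x5 state covariance
    z     : measurement [range, bearing] to the landmark
    R     : 2x2 measurement noise covariance
    """
    x, y, th, lx, ly = mu
    dx, dy = lx - x, ly - y
    q = dx**2 + dy**2
    r = np.sqrt(q)
    # Predicted measurement h(mu): expected range and bearing to the landmark
    z_hat = np.array([r, np.arctan2(dy, dx) - th])
    # Jacobian H = dh/dmu, linearized at the current estimate
    H = np.array([
        [-dx / r, -dy / r,  0.0,  dx / r,  dy / r],
        [ dy / q, -dx / q, -1.0, -dy / q,  dx / q],
    ])
    S = H @ Sigma @ H.T + R              # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)   # Kalman gain
    nu = z - z_hat
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    mu_new = mu + K @ nu
    Sigma_new = (np.eye(5) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

In a full fusion scheme along the lines the abstract describes, the same correction form would be applied at high rate with sonar point/line features and, less frequently, with recognized objects, whose known identity fixes the data association.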
Original language | English
---|---
Pages (from-to) | 315-335
Number of pages | 21
Journal | Autonomous Robots
Volume | 24
Issue number | 3
DOIs |
Publication status | Published - 2008 Apr
Bibliographical note
Funding Information: This work was supported in part by the IT R&D program of MIC/IITA [2005-S-033-02, Embedded Component Technology and Standardization for URC], by the Korea Science and Engineering Foundation (KOSEF) grant of MOST [No. R0A-2003-000-10308-0], and by the grant of the Korea Health 21 R&D Project, Ministry of Health & Welfare [A020603], Republic of Korea.
Keywords
- Mobile robot
- SLAM
- Sonar feature detection
- Stereo camera
- Ultrasonic sensor
- Visual object recognition
ASJC Scopus subject areas
- Artificial Intelligence