Abstract
We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.
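The core idea of the abstract, a graph whose nodes are stored local views and whose edges are traversable paths, can be sketched in a few lines. The class and method names below are hypothetical illustrations, not the authors' implementation; route planning is shown as a breadth-first search over view IDs, with each hop assumed to be executed by visual homing toward the next stored view, so no metric coordinates appear anywhere.

```python
from collections import deque


class TopologicalMap:
    """Hypothetical sketch of a view graph: nodes are snapshot IDs,
    edges are traversable paths found during exploration."""

    def __init__(self):
        self.views = {}   # view_id -> stored local view (e.g. an omnidirectional image)
        self.edges = {}   # view_id -> set of directly reachable view_ids

    def add_view(self, view_id, snapshot):
        """Record a selected place by its local view of the surrounding scene."""
        self.views[view_id] = snapshot
        self.edges.setdefault(view_id, set())

    def connect(self, a, b):
        """A traversable path was found between two places (stored bidirectionally)."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def plan(self, start, goal):
        """Breadth-first search over the view graph.

        Returns a list of view IDs, or None if the goal is unreachable.
        Each hop would be executed by homing toward the next stored view,
        so the route never uses metric information.
        """
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None


# Usage: four places explored in a line A-B-C-D.
m = TopologicalMap()
for v in "ABCD":
    m.add_view(v, snapshot=None)  # snapshot omitted in this sketch
m.connect("A", "B")
m.connect("B", "C")
m.connect("C", "D")
print(m.plan("A", "D"))  # ['A', 'B', 'C', 'D']
```

The graph search only selects the sequence of intermediate views; the actual movement between adjacent views is left to the homing strategy described in the paper.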
Original language | English |
---|---|
Pages (from-to) | 111-125 |
Number of pages | 15 |
Journal | Autonomous Robots |
Volume | 5 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1998 |
Bibliographical note
Funding Information: The present work has benefited from discussions with, and technical support from, Philipp Georg, Susanne Huber, and Titus Neumann. We thank Fiona Newell and our reviewers for helpful comments on the manuscript. Financial support was provided by the Max-Planck-Gesellschaft and the Studienstiftung des deutschen Volkes.
Keywords
- Cognitive maps
- Environment modeling
- Exploration
- Mobile robots
- Omnidirectional sensor
- Topological maps
- Visual navigation
ASJC Scopus subject areas
- Artificial Intelligence