Abstract
In this paper, we focus on the problem of efficiently locating a target object described by free-form text using a mobile robot equipped with vision sensors (e.g., an RGBD camera). Conventional active visual search predefines a set of objects to search for, rendering these techniques restrictive in practice. To provide added flexibility in active visual search, we propose a system in which a user can specify the target with free-form text; we call this system Zero-shot Active Visual Search (ZAVIS). ZAVIS detects and plans a search for the user-specified target object over a semantic grid map built from static landmarks (e.g., a desk or bed). For efficient planning of object search patterns, ZAVIS considers commonsense-knowledge-based co-occurrence and predictive uncertainty when deciding which landmarks to visit first. We validate the proposed method with respect to SR (success rate) and SPL (success weighted by path length) in both simulated and real-world environments. The proposed method outperforms previous methods in terms of SPL in simulated scenarios, and we further demonstrate ZAVIS on a Pioneer-3AT robot in real-world studies.
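The SPL metric named in the abstract has a standard definition: per episode, success is weighted by the ratio of the shortest-path length to the length of the path actually taken, then averaged over all episodes. A minimal sketch of that computation (the function and parameter names are hypothetical, not from the paper):

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest_len, path_len) tuples, where
      success      -- bool, whether the agent found the target
      shortest_len -- float, length of the shortest path to the target
      path_len     -- float, length of the path the agent actually took
    Returns the mean of success * shortest_len / max(path_len, shortest_len).
    """
    if not episodes:
        return 0.0
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            # max(...) guards against paths reported shorter than the optimum
            total += shortest / max(taken, shortest)
    return total / len(episodes)
```

For example, an agent that succeeds optimally once, succeeds with a 2x-longer path once, and fails once scores (1 + 0.5 + 0) / 3 = 0.5.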
Original language | English |
---|---|
Title of host publication | Proceedings - ICRA 2023 |
Subtitle of host publication | IEEE International Conference on Robotics and Automation |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 2004-2010 |
Number of pages | 7 |
ISBN (Electronic) | 9798350323658 |
DOIs | |
Publication status | Published - 2023 |
Event | 2023 IEEE International Conference on Robotics and Automation, ICRA 2023 - London, United Kingdom |
Duration | 29 May 2023 → 2 Jun 2023 |
Publication series
Name | Proceedings - IEEE International Conference on Robotics and Automation |
---|---|
Volume | 2023-May |
ISSN (Print) | 1050-4729 |
Conference
Conference | 2023 IEEE International Conference on Robotics and Automation, ICRA 2023 |
---|---|
Country/Territory | United Kingdom |
City | London |
Period | 29/5/2023 → 2/6/2023 |
Bibliographical note
Funding Information: This work was supported by Samsung Electronics (IO201230-08278-01) and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2.220451.01).
Publisher Copyright:
© 2023 IEEE.
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Artificial Intelligence