Human localization based on the fusion of vision and sound system

Sung Wan Kim, Ji Yong Lee, Doik Kim, Bum Jae You, Nakju Lett Doh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

In this paper, a method for accurate human localization using a sequential fusion of sound and vision is proposed. Although sound localization alone works well in most cases, conditions such as a noisy environment or a small inter-microphone distance can produce wrong or poor results. A vision system also has deficiencies, such as a limited field of view. To solve these problems, we propose a method that combines sound localization and vision in real time. Specifically, the robot first finds the rough location of the speaker via sound source localization, and then uses vision to increase the accuracy of that estimate. Experimental results show that the proposed method is more accurate and reliable than pure sound localization.
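The abstract describes a sequential pipeline: a coarse bearing from sound source localization, followed by a vision-based refinement once the speaker is inside the camera's field of view. The sketch below illustrates that flow under stated assumptions only; it is not the authors' implementation. The two-microphone TDOA model, the far-field geometry, and all names (sound_source_azimuth, refine_with_face, localize_speaker) and parameters (microphone spacing, camera field of view) are hypothetical, and the robot and sensor interfaces are passed in as callables rather than modeled on any real API.

import math

def sound_source_azimuth(left, right, fs, mic_distance, speed_of_sound=343.0):
    """Rough speaker bearing from a two-microphone TDOA estimate (hypothetical helper).

    Cross-correlates the two channels to find the inter-microphone time
    delay, then converts it to an azimuth via the far-field model
    sin(theta) = c * tdoa / d. With a small microphone baseline this
    estimate is coarse, which is why it is refined with vision.
    """
    n = len(left)
    max_lag = int(mic_distance / speed_of_sound * fs) + 1
    best_lag, best_corr = 0, float("-inf")
    # Naive O(n * max_lag) cross-correlation search, kept simple for clarity.
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(left[i] * right[i - lag]
                   for i in range(max(0, lag), min(n, n + lag)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    tdoa = best_lag / fs
    s = max(-1.0, min(1.0, speed_of_sound * tdoa / mic_distance))
    return math.degrees(math.asin(s))

def refine_with_face(face_bbox, frame_width_px, horizontal_fov_deg):
    """Map a detected face's horizontal pixel offset from the image center
    to an azimuth correction relative to the camera's optical axis."""
    x, _, w, _ = face_bbox
    face_center = x + w / 2.0
    offset = (face_center - frame_width_px / 2.0) / frame_width_px
    return offset * horizontal_fov_deg

def localize_speaker(left, right, fs, mic_distance,
                     detect_face, grab_frame, turn_robot,
                     frame_width_px=640, horizontal_fov_deg=60.0):
    """Sequential sound-then-vision localization in the spirit of the paper:
    sound gives a rough bearing, the robot turns so the speaker enters the
    camera's limited field of view, and face detection refines the angle."""
    rough = sound_source_azimuth(left, right, fs, mic_distance)
    turn_robot(rough)                 # bring the speaker into the camera view
    bbox = detect_face(grab_frame())  # e.g. a Haar-cascade face detector
    if bbox is None:
        return rough                  # fall back to the sound-only estimate
    return rough + refine_with_face(bbox, frame_width_px, horizontal_fov_deg)

Passing the face detector, frame grabber, and motor command in as callables keeps the sketch self-contained; in practice these would wrap a real detector (e.g. OpenCV's Haar cascades) and the robot's pan controller.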

Original language: English
Title of host publication: URAI 2011 - 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence
Pages: 495-498
Number of pages: 4
DOIs
Publication status: Published - 2011
Event: 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence, URAI 2011 - Incheon, Korea, Republic of
Duration: 2011 Nov 23 - 2011 Nov 26

Publication series

Name: URAI 2011 - 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence

Other

Other: 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence, URAI 2011
Country/Territory: Korea, Republic of
City: Incheon
Period: 11/11/23 - 11/11/26

Keywords

  • Face Detection
  • Fusion
  • Human Localization
  • Sound Localization

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
