Real-time human-robot interaction based on continuous gesture spotting and recognition

Hueng Il Suk, Seong Sik Cho, Hee Deok Yang, Myung Cheol Roh, Seong Whan Lee

Research output: Contribution to conference › Paper › peer-review

1 Citation (Scopus)

Abstract

Service robots have recently begun to enter human living spaces, whereas traditional robots were used mainly for manufacturing, transportation, and similar tasks. Service robots are intelligent robots that can understand human gestures and provide services automatically. Natural interaction with a service robot requires automatic gesture recognition; in particular, vision-based gesture recognition provides an intuitive and natural interface without requiring the user to wear any special devices. This paper proposes methods for recognizing whole-body and hand gestures in continuous motion. Two major issues are addressed: the first is estimating whole-body components in whole-body gestures, and the second is spotting gestures in continuous motion. The proposed human pose estimation method is based on an analysis of common body components, i.e., the body parts that move and vary the least; these are designated and used flexibly in the pose matching process. From an exemplar database, a relative variability and tolerance model (in terms of the allowable amount of motion) is acquired for each limb or body part of a given pose, and the common body components found across the exemplar data for each pose are used to match an input target. The proposed method showed excellent results on the CMU MoBo and aerobics sequence data. We also propose a novel spotting method that designs a threshold model in a conditional random field (CRF): by augmenting the CRF with one additional label, it provides an adaptive threshold for distinguishing between meaningful and non-meaningful gestures. Experiments were conducted on American Sign Language (ASL), one of the most complex gesture sets. The results demonstrate that our system can detect signs in continuous data with an 87.5% spotting rate, versus 67.2% for a CRF without a non-sign label.
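The core spotting idea in the abstract, augmenting the label set with one extra non-gesture label so that its score acts as an adaptive threshold, can be sketched with a plain Viterbi decoder. This is a minimal illustration, not the paper's trained CRF: the labels ("wave", "point"), per-frame scores, and transition scores below are invented toy values, whereas a real CRF would learn feature weights from data.

```python
# Sketch of gesture spotting with an augmented NON_GESTURE label.
# All labels, scores, and transitions here are illustrative assumptions.

NON_GESTURE = "NON_GESTURE"
LABELS = ["wave", "point", NON_GESTURE]

def viterbi(frame_scores, transition, labels=LABELS):
    """Decode the most likely frame-level label sequence.

    frame_scores: list of dicts mapping label -> per-frame score.
    transition: dict mapping (prev_label, cur_label) -> transition score.
    """
    # best[t][label] = (cumulative score, backpointer to previous label)
    best = [{lab: (frame_scores[0][lab], None) for lab in labels}]
    for t in range(1, len(frame_scores)):
        col = {}
        for cur in labels:
            prev_lab, score = max(
                ((p, best[t - 1][p][0] + transition[(p, cur)]) for p in labels),
                key=lambda x: x[1],
            )
            col[cur] = (score + frame_scores[t][cur], prev_lab)
        best.append(col)
    # Backtrack from the best final label.
    last = max(labels, key=lambda lab: best[-1][lab][0])
    path = [last]
    for t in range(len(frame_scores) - 1, 0, -1):
        last = best[t][last][1]
        path.append(last)
    return list(reversed(path))

def spot_gestures(path):
    """Collapse the frame path into (label, start, end) segments,
    discarding NON_GESTURE spans -- the augmented label plays the
    role of an adaptive threshold on candidate gestures."""
    segments, start = [], 0
    for t in range(1, len(path) + 1):
        if t == len(path) or path[t] != path[start]:
            if path[start] != NON_GESTURE:
                segments.append((path[start], start, t - 1))
            start = t
    return segments

# Toy input: frames 0-1 look like noise, frames 2-4 look like "wave".
scores = [
    {"wave": 0.1, "point": 0.0, NON_GESTURE: 0.8},
    {"wave": 0.2, "point": 0.1, NON_GESTURE: 0.7},
    {"wave": 0.9, "point": 0.2, NON_GESTURE: 0.1},
    {"wave": 0.8, "point": 0.1, NON_GESTURE: 0.2},
    {"wave": 0.9, "point": 0.0, NON_GESTURE: 0.1},
]
trans = {(a, b): (0.1 if a == b else -0.1) for a in LABELS for b in LABELS}
path = viterbi(scores, trans)
segments = spot_gestures(path)
print(segments)  # -> [('wave', 2, 4)]
```

Because the non-gesture label competes with the gesture labels frame by frame, no fixed global threshold is needed: a candidate segment is kept only where gesture scores beat the non-gesture score along the best path.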

Original language: English
Pages: 120-123
Number of pages: 4
Publication status: Published - 2008
Event: 39th International Symposium on Robotics, ISR 2008 - Seoul, Korea, Republic of
Duration: 2008 Oct 15 - 2008 Oct 17

Other

Other: 39th International Symposium on Robotics, ISR 2008
Country/Territory: Korea, Republic of
City: Seoul
Period: 08/10/15 - 08/10/17

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Software
