Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions

Jaedong Lee, Changhyeon Lee, Gerard Jounghyun Kim

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

We consider a multimodal method for smart-watch text entry, called "Vouch," which combines touch and voice input. Touch input is familiar and ergonomically accessible, but is limited by the fat-finger problem (or, equivalently, the small screen size) and is sensitive to user motion. Voice input is largely immune to slow user motion, but its reliability can suffer from environmental noise. These complementary characteristics can offset each other under the difficult operating conditions typical of smart watches. With Vouch, the user makes an approximate touch among the densely packed alphabetic keys; the accompanying voice input then disambiguates the target from among the possible candidates, if it does not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method, focusing on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach yields more reliable recognition performance and higher usability.
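The fusion idea described above can be illustrated with a minimal sketch. Note this is an illustrative reconstruction, not the authors' implementation: the key layout, the Gaussian touch model, the linear fusion weight, and the voice scores are all assumptions made for the example.

```python
import math

# Illustrative key centers on a tiny watch keyboard (x, y in mm); hypothetical layout.
KEY_CENTERS = {"q": (0, 0), "w": (4, 0), "e": (8, 0), "a": (2, 4), "s": (6, 4)}

def touch_scores(touch_xy, sigma=3.0):
    """Normalized Gaussian likelihood of each key given the touch point."""
    tx, ty = touch_xy
    scores = {}
    for key, (kx, ky) in KEY_CENTERS.items():
        d2 = (tx - kx) ** 2 + (ty - ky) ** 2
        scores[key] = math.exp(-d2 / (2 * sigma ** 2))
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def fuse(touch_xy, voice_scores, w_touch=0.5):
    """Linearly combine touch evidence with per-letter voice confidences
    (here mocked; a real system would get these from a speech recognizer)
    and return the most likely letter."""
    t = touch_scores(touch_xy)
    fused = {k: w_touch * t[k] + (1 - w_touch) * voice_scores.get(k, 0.0)
             for k in KEY_CENTERS}
    return max(fused, key=fused.get)

# An ambiguous touch midway between 'w' and 'e'; voice evidence favors 'e'.
best = fuse((6, 0), {"e": 0.8, "w": 0.15, "s": 0.05})
```

In this toy example the touch alone cannot decide between the equidistant 'w' and 'e', and the voice scores break the tie, mirroring the disambiguation role the abstract describes.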

Original language: English
Pages (from-to): 289-299
Number of pages: 11
Journal: Journal on Multimodal User Interfaces
Volume: 11
Issue number: 3
DOIs
Publication status: Published - 1 Sept 2017

Keywords

  • Multimodal interaction
  • Smart watch input
  • Touch input
  • Voice input

ASJC Scopus subject areas

  • Signal Processing
  • Human-Computer Interaction

