Robust visual tracking framework in the presence of blurring by arbitrating appearance- and feature-based detection

Tae Koo Kang, Yung Hak Mo, Dong Sung Pae, Choon Ki Ahn, Myo Taeg Lim

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

This paper proposes a new visual tracking framework and demonstrates its merits via mobile robot experiments. When a mobile robot is moving, the image sequence from its vision system is not static, since slipping and vibration occur; these disturbances cause image blurring. Therefore, in this paper, we address the problem of robust object tracking under blurring and introduce a novel visual tracking framework that arbitrates between AdaBoost-based (feature-based) detection and appearance-based detection to overcome this problem. The proposed framework consists of three parts: (1) feature extraction using the Modified Discrete Gaussian–Hermite Moment (MDGHM) together with fuzzy-based distortion error compensation, (2) object detection by arbitration of appearance- and feature-based detectors, and (3) object tracking using a Finite Impulse Response (FIR) filter. To demonstrate the performance of the proposed framework, mobile robot visual tracking experiments are carried out. The results show that the proposed framework is more robust against blurring than conventional feature- and appearance-based methods.
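To make the three-part structure concrete, the sketch below illustrates (not the authors' implementation) how a simple arbitration rule between two detectors could feed an FIR-filter tracker. The confidence-based `arbitrate` rule, the equal-weight FIR taps, and all class/function names are assumptions for illustration only; the paper's actual arbitration accounts for blur via MDGHM features and fuzzy distortion compensation.

```python
# Hypothetical sketch (not the paper's code): arbitration of feature- and
# appearance-based detections, followed by FIR-filter smoothing of the
# resulting object position over a fixed window of past measurements.
from collections import deque
import numpy as np


class FIRTracker:
    """Smooths 2-D position measurements with a finite-impulse-response filter."""

    def __init__(self, taps):
        # taps: FIR coefficients, newest measurement first; they should sum to 1.
        self.taps = np.asarray(taps, dtype=float)
        self.history = deque(maxlen=len(taps))

    def update(self, position):
        self.history.appendleft(np.asarray(position, dtype=float))
        # Until the window is full, renormalize over the available samples.
        w = self.taps[: len(self.history)]
        w = w / w.sum()
        return sum(wi * p for wi, p in zip(w, self.history))


def arbitrate(feature_det, appearance_det):
    """Pick the detection with the higher confidence score.

    Each argument is (position, confidence) with confidence in [0, 1].
    This simple rule is only a placeholder for the paper's arbitration logic.
    """
    return feature_det if feature_det[1] >= appearance_det[1] else appearance_det


# Usage: equal-weight 4-tap FIR filter over arbitrated detections.
tracker = FIRTracker(taps=[0.25, 0.25, 0.25, 0.25])
for frame_idx in range(5):
    feat = (np.array([10.0 + frame_idx, 20.0]), 0.6)   # e.g., AdaBoost detector output
    appr = (np.array([10.5 + frame_idx, 19.5]), 0.8)   # e.g., appearance-model output
    pos, _ = arbitrate(feat, appr)
    print(frame_idx, tracker.update(pos))
```

Because an FIR filter uses only a finite window of past measurements, it has no accumulating internal state, which is one reason such trackers are often preferred when measurements are intermittently corrupted (here, by blur).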

Original language: English
Pages (from-to): 50-69
Number of pages: 20
Journal: Measurement: Journal of the International Measurement Confederation
Volume: 95
DOIs
Publication status: Published - 2017 Jan 1

Keywords

  • Finite impulse response tracker
  • Mobile robot
  • Modified discrete Gaussian–Hermite moment
  • Object detection
  • Visual object tracking

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering
