Dominant orientation patch matching for HMAX

Yan Feng Lu, Hua Zhen Zhang, Tae Koo Kang, Myo Taeg Lim

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

The biologically inspired model for object recognition, Hierarchical Model and X (HMAX), has attracted considerable attention in recent years. HMAX is robust (i.e., shift- and scale-invariant), but it is sensitive to rotational deformation, which greatly limits its performance in object recognition. The main reason for this is that HMAX lacks an appropriate directional module against rotational deformation, thereby often leading to mismatch. To address this issue, we propose a novel patch-matching method for HMAX called Dominant Orientation Patch Matching (DOPM), which calculates the dominant orientation of the selected patches and implements patch-to-patch matching. In contrast to patch matching with the whole target image (second layer C1) in the conventional HMAX model, which involves huge amounts of redundant information in the feature representation, the DOPM-based HMAX model (D-HMAX) quantizes the C1 layer to patch sets with better distinctiveness, then realizes patch-to-patch matching based on the dominant orientation. To show the effectiveness of D-HMAX, we apply it to object categorization and conduct experiments on the CalTech101, CalTech05, GRAZ01, and GRAZ02 databases. Our experimental results demonstrate that D-HMAX outperforms conventional HMAX and is comparable to existing architectures that have a similar framework.
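The abstract only sketches the idea at a high level, so the following is a minimal illustrative sketch rather than the authors' algorithm. It assumes C1 responses are stored as a NumPy array of shape (n_orientations, H, W), that a "patch" is a small crop of that array, and that similarity is measured with Euclidean distance; all function names and parameters here are hypothetical.

```python
# Illustrative sketch of dominant-orientation patch matching (assumptions noted above).
import numpy as np

def dominant_orientation(patch):
    """Return the index of the orientation channel with the largest total energy.

    patch: array of shape (n_orientations, h, w) cut from a C1-like layer.
    """
    energy = patch.reshape(patch.shape[0], -1).sum(axis=1)
    return int(np.argmax(energy))

def match_patch(query_patch, candidate_patches):
    """Patch-to-patch matching gated by dominant orientation.

    Only candidates whose dominant orientation agrees with the query are
    compared, which captures the rough intuition of restricting matching
    to orientation-consistent patches instead of scanning the whole C1 layer.
    Returns (best_index, best_distance); (None, inf) if no candidate agrees.
    """
    q_ori = dominant_orientation(query_patch)
    best_idx, best_dist = None, np.inf
    for i, cand in enumerate(candidate_patches):
        if dominant_orientation(cand) != q_ori:
            continue
        dist = np.linalg.norm(query_patch - cand)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx, best_dist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy C1-like patches: 4 orientation bands, 8x8 spatial extent.
    query = rng.random((4, 8, 8))
    candidates = [rng.random((4, 8, 8)) for _ in range(20)]
    idx, dist = match_patch(query, candidates)
    print(f"best candidate: {idx}, distance: {dist:.3f}")
```

In this toy version the dominant orientation is simply the argmax of per-channel energy; the paper's actual computation and matching criterion may differ.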

Original language: English
Pages (from-to): 155-166
Number of pages: 12
Journal: Neurocomputing
Volume: 193
DOIs
Publication status: Published - 2016 Jun 12

Keywords

  • Classification
  • Dominant orientation
  • HMAX
  • Matching
  • Object recognition
  • Patch

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
