Module of Axis-based Nexus Attention for weakly supervised object localization

Junghyo Sohn, Eunjin Jeon, Wonsik Jung, Eunsong Kang, Heung-Il Suk

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Weakly supervised object localization remains challenging because models tend to identify and segment only the discriminative parts of an object rather than the entire object. To tackle this problem, corruption-based approaches have been devised, which train non-discriminative regions by corrupting (e.g., erasing) the input images or intermediate feature maps. However, this approach requires an additional hyperparameter, the corruption threshold, to determine the degree of corruption; it can also unfavorably disrupt training and tends to localize object regions only coarsely. In this paper, we propose a novel approach, the Module of Axis-based Nexus Attention (MoANA), which adaptively activates less discriminative regions along with the class-discriminative regions without an additional hyperparameter, and localizes an entire object elaborately. Specifically, MoANA consists of three mechanisms: (1) triple-view attentions representation, (2) attentions expansion, and (3) features calibration. Unlike other attention-based methods that train a coarse attention map with the same value across elements of a feature map, MoANA trains fine-grained attention maps by assigning a different attention value to each element. We validated MoANA by comparing it with various methods, analyzed the effect of each component, and visualized the attention maps to provide insights into the calibration.
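    The three mechanisms named above can be illustrated with a simplified sketch: compute a 1-D attention vector per axis of a C×H×W feature map, broadcast the three vectors back to the full shape, and rescale each element. This is not the paper's actual formulation (MoANA learns its attentions with trainable layers); the mean-pooling and sigmoid used here, and the function name `triple_view_attention`, are illustrative assumptions.

    ```python
    import math

    def sigmoid(x):
        # squash a pooled statistic into (0, 1) -- a stand-in for a learned gate
        return 1.0 / (1.0 + math.exp(-x))

    def triple_view_attention(feat):
        """feat: C x H x W nested lists; returns calibrated features, same shape.

        (1) Triple-view representation: mean-pool over the other two axes per
            axis, then apply a sigmoid, giving one 1-D attention per view.
        (2) Expansion: broadcast the three vectors to C x H x W and multiply,
            so every element receives its own attention value.
        (3) Calibration: rescale the input features element-wise.
        """
        C, H, W = len(feat), len(feat[0]), len(feat[0][0])
        a_c = [sigmoid(sum(feat[c][h][w] for h in range(H) for w in range(W)) / (H * W))
               for c in range(C)]
        a_h = [sigmoid(sum(feat[c][h][w] for c in range(C) for w in range(W)) / (C * W))
               for h in range(H)]
        a_w = [sigmoid(sum(feat[c][h][w] for c in range(C) for h in range(H)) / (C * H))
               for w in range(W)]
        return [[[feat[c][h][w] * a_c[c] * a_h[h] * a_w[w]
                  for w in range(W)] for h in range(H)] for c in range(C)]
    ```

    Because each element is scaled by the product of three axis attentions rather than a single shared value, two positions in the same channel can receive different weights, which is the fine-grained, per-element behavior the abstract contrasts with coarse attention maps.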

    Original language: English
    Article number: 18588
    Journal: Scientific Reports
    Volume: 13
    Issue number: 1
    DOIs
    Publication status: Published - 2023 Dec

    Bibliographical note

    Publisher Copyright:
    © 2023, The Author(s).

    ASJC Scopus subject areas

    • General

