Mixup Mask Adaptation: Bridging the gap between input saliency and representations via attention mechanism in feature mixup

Minsoo Kang, Minkoo Kang, Seong-Whan Lee, Suhyun Kim

    Research output: Contribution to journal › Article › peer-review

    Abstract

    The inherent complexity and extensive architecture of deep neural networks often lead to overfitting, compromising their ability to generalize to new, unseen data. Data augmentation, a regularization technique, is now considered vital to alleviating this, and mixup, which blends pairs of images and labels, has proven effective in enhancing model generalization. Recently, incorporating saliency into mixup has shown performance gains by retaining salient regions in the mixed results. While these methods have become mainstream at the input level, their application at the feature level remains under-explored. Our observations indicate that naively applying input saliency-based methods at the feature level does not consistently improve performance. In this paper, we attribute this primarily to two challenges: the ‘Hard Boundary Issue’ and ‘Saliency Mismatch.’ The Hard Boundary Issue describes a situation where masks with distinct, sharp edges work well at the input level but cause unintended distortions in deeper layers. Saliency Mismatch refers to the disparity between saliency masks generated from input images and the saliency of the corresponding feature maps. To tackle these challenges, we present a novel method called ‘attention-based mixup mask adaptation’ (MMA). This approach employs an attention mechanism to adapt mixup masks, which are designed to maximize saliency at the input level, for feature-level augmentation. We reduce Saliency Mismatch by incorporating the spatial significance of the feature map into the mixup mask, and we address the Hard Boundary Issue by applying softmax to smooth the adapted mask. Through comprehensive experiments, we validate our observations and confirm the effectiveness of applying MMA to saliency-aware mixup approaches at the feature level, as evidenced by performance improvements on multiple benchmarks and robustness improvements against corruption and deformation.
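The abstract describes two operations on a mixup mask: injecting the feature map's spatial saliency into the input-level mask, and smoothing the result with a softmax. The sketch below illustrates that idea only; the saliency proxy (channel-wise mean of squared activations), the multiplicative combination rule, the temperature, and the peak normalization are all illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(xs, temperature=1.0):
    """Numerically stable softmax over a flat list of scores."""
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def adapt_mask(hard_mask, feature_map, temperature=0.5):
    """Adapt a hard input-level mixup mask to a feature map (sketch).

    hard_mask:   H x W grid of 0.0/1.0 values (assumed already resized
                 to the feature map's spatial resolution)
    feature_map: C x H x W nested lists of activations
    Returns an H x W soft mask with values in [0, 1].
    """
    C = len(feature_map)
    H, W = len(hard_mask), len(hard_mask[0])
    # Spatial saliency proxy: channel-wise mean of squared activations
    # at each location (one simple stand-in for feature saliency).
    saliency = [[sum(feature_map[c][i][j] ** 2 for c in range(C)) / C
                 for j in range(W)] for i in range(H)]
    # Combine the hard mask with feature saliency so the adapted mask
    # reflects where the feature map itself is active.
    scores = [hard_mask[i][j] * saliency[i][j]
              for i in range(H) for j in range(W)]
    # Softmax turns the sharp 0/1 boundary into a smooth spatial weighting.
    soft = softmax(scores, temperature)
    # Normalize by the peak so the mask stays in [0, 1] for mixing.
    peak = max(soft)
    return [[soft[i * W + j] / peak for j in range(W)] for i in range(H)]
```

The resulting soft mask could then mix two feature maps location-wise, e.g. `mixed[c][i][j] = m[i][j] * f1[c][i][j] + (1 - m[i][j]) * f2[c][i][j]`.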

    Original language: English
    Article number: 105013
    Journal: Image and Vision Computing
    Volume: 146
    DOIs
    Publication status: Published - 2024 Jun

    Bibliographical note

    Publisher Copyright:
    © 2024 The Authors

    Keywords

    • Data augmentation
    • Mixup
    • Regularization

    ASJC Scopus subject areas

    • Signal Processing
    • Computer Vision and Pattern Recognition
