Transferability Analysis of Adversarial Examples in CNN-based SAR Image Classification

  • Minjae Kim*
  • Haksu Han
  • Gyeongsup Lim
  • Yehyeong Lee
  • Sanghun Sim
  • Junbeom Hur

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, adversarial examples have received increasing attention as a cybersecurity threat to CNN-based Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems. Owing to their unique imaging mechanism, SAR images differ from optical images and consist of target, shadow, and speckle regions. Recent studies on adversarial examples for SAR-ATR have developed black-box attacks suitable for real-world environments by focusing perturbations on the target region using a surrogate model. However, if an attack is focused only on the target area, it may overfit to the surrogate model and become unsuitable for the target model. In this paper, we derive three research questions. RQ1: 'Does an attack focused only on the target show superior transferability?', RQ2: 'Is transferability also affected by the other regions of SAR images?', and RQ3: 'If physical attacks are feasible not only on the target but also on its shadow, how does transferability change?' To answer these research questions, we conducted comparative experiments with attacks focused on each individual region as well as on combinations of regions. For the analysis, we used 8 models (1 surrogate model and 7 target models) trained on the MSTAR dataset. In addition, we used four algorithms (FGSM, CW, DeepFool, and PGD) to create adversarial examples, and the SAR-Bake dataset to segment SAR regions. Specifically, we measured the transfer attack success rate when perturbations were applied to each specific region (all pixels, target, shadow, speckle, and target+shadow). Furthermore, we used Grad-CAM to visualize the impact of these regions on model outcomes and thus interpret the experimental results intuitively. Our findings highlight that perturbing only the target area is insufficient and that extending perturbations to shadow regions effectively enhances transferability.
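The region-focused attacks compared in the abstract can be sketched as a masked gradient-sign step: a binary mask (e.g. derived from SAR-Bake region segmentation) restricts the perturbation to one region or a union of regions such as target+shadow. The sketch below is illustrative only, assuming a precomputed loss gradient and hand-made toy masks; `masked_fgsm` and the mask layouts are hypothetical names, not the paper's implementation.

```python
import numpy as np

def masked_fgsm(image, grad, mask, epsilon=0.03):
    """FGSM-style step applied only where mask == 1 (a hypothetical sketch)."""
    perturbation = epsilon * np.sign(grad) * mask
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy 4x4 "SAR chip": uniform intensities and a stand-in loss gradient of ones.
image = np.full((4, 4), 0.5)
grad = np.ones((4, 4))

# Illustrative region masks; in practice these come from region segmentation.
target_mask = np.zeros((4, 4))
target_mask[1:3, 1:3] = 1.0                                    # target region
shadow_mask = np.zeros((4, 4))
shadow_mask[3, :] = 1.0                                        # shadow region
combined_mask = np.clip(target_mask + shadow_mask, 0.0, 1.0)   # target+shadow

adv_target = masked_fgsm(image, grad, target_mask, epsilon=0.1)
adv_combined = masked_fgsm(image, grad, combined_mask, epsilon=0.1)
```

With `mask` set to all ones the step reduces to ordinary FGSM over all pixels, so the same routine covers every region configuration the experiments compare.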

Original language: English
Title of host publication: 39th International Conference on Information Networking, ICOIN 2025
Publisher: IEEE Computer Society
Pages: 563-568
Number of pages: 6
ISBN (Electronic): 9798331506940
DOIs
Publication status: Published - 2025
Event: 39th International Conference on Information Networking, ICOIN 2025 - Chiang Mai, Thailand
Duration: 15 Jan 2025 - 17 Jan 2025

Publication series

Name: International Conference on Information Networking
ISSN (Print): 1976-7684

Conference

Conference: 39th International Conference on Information Networking, ICOIN 2025
Country/Territory: Thailand
City: Chiang Mai
Period: 25/1/15 - 25/1/17

Bibliographical note

Publisher Copyright:
© 2025 IEEE.

Keywords

  • Adversarial example
  • Convolutional Neural Networks
  • Synthetic Aperture Radar
  • Transferability

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
