On the Robustness of Graph Reduction Against GNN Backdoor

  • Yuxuan Zhu
  • Michael Mandulak
  • Kerui Wu
  • George Slota
  • Yuseok Jeon
  • Ka Ho Chow
  • Lei Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Graph Neural Networks (GNNs) have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques have recently emerged as effective methods for accelerating GNN training on large-scale graphs. However, the development of graph reduction techniques does not yet account for how these methods interact with data poisoning attacks against GNNs, particularly existing backdoor attacks. This paper conducts a thorough examination of the robustness of graph reduction methods for scalable GNN training in the presence of state-of-the-art backdoor attacks. We perform a comprehensive robustness analysis across six coarsening methods and six sparsification methods for graph reduction, under three GNN backdoor attacks against three GNN architectures. Our findings indicate that the effectiveness of graph reduction methods in mitigating attack success rates varies significantly, with some methods even exacerbating the attacks. Through detailed analyses of triggers and poisoned nodes, we interpret our findings and deepen our understanding of how graph reduction influences robustness against backdoor attacks. These results highlight the critical need to incorporate robustness considerations into graph reduction for scalable GNN training.
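To make the notion of graph reduction concrete, the sketch below shows random edge sparsification, a deliberately simple stand-in for the sparsification methods the paper evaluates (which rely on more principled criteria, e.g. effective resistance or spanner construction). The function name, edge-list representation, and keep ratio are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def sparsify_random_edges(edges, keep_ratio=0.5, seed=0):
    """Return a random subset of an edge list.

    A toy sparsifier: keeps roughly `keep_ratio` of the edges, chosen
    uniformly at random. Real sparsification methods score edges by
    structural importance instead of sampling uniformly.
    """
    rng = random.Random(seed)
    k = int(len(edges) * keep_ratio)
    return rng.sample(edges, k)

# A small ring graph with chord edges, reduced to half its edges.
edges = [(i, (i + 1) % 20) for i in range(20)] + \
        [(i, (i + 5) % 20) for i in range(20)]
reduced = sparsify_random_edges(edges, keep_ratio=0.5)
print(len(edges), len(reduced))  # 40 20
```

In a poisoned training graph, whether such a reduction removes or retains the attacker's trigger edges is exactly the kind of behavior the paper's robustness analysis measures.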

Original language: English
Title of host publication: AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with CCS 2024
Publisher: Association for Computing Machinery, Inc
Pages: 65-76
Number of pages: 12
ISBN (Electronic): 9798400712289
DOIs
Publication status: Published - 2024 Nov 22
Externally published: Yes
Event: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024 - Salt Lake City, United States
Duration: 2024 Oct 14 - 2024 Oct 18

Publication series

Name: AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024

Conference

Conference: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024
Country/Territory: United States
City: Salt Lake City
Period: 24/10/14 - 24/10/18

Bibliographical note

Publisher Copyright:
© 2024 Copyright held by the owner/author(s).

Keywords

  • Coarsening
  • Graph Backdoor
  • Graph Neural Network
  • Graph Reduction
  • Sparsification
  • Trustworthy AI

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Software
