On the Robustness of Graph Reduction Against GNN Backdoor

Abstract
Graph Neural Networks (GNNs) have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques have recently emerged as effective methods for accelerating GNN training on large-scale graphs. However, existing graph reduction techniques do not yet address how they interact with the risks of data poisoning attacks against GNNs, particularly existing backdoor attacks. This paper conducts a thorough examination of the robustness of graph reduction methods in scalable GNN training in the presence of state-of-the-art backdoor attacks. We perform a comprehensive robustness analysis across six coarsening methods and six sparsification methods for graph reduction, under three GNN backdoor attacks against three GNN architectures. Our findings indicate that the effectiveness of graph reduction methods in mitigating attack success rates varies significantly, with some methods even exacerbating the attacks. Through detailed analyses of triggers and poisoned nodes, we interpret our findings and deepen our understanding of how graph reduction influences robustness against backdoor attacks. These results highlight the critical need to incorporate robustness considerations into graph reduction for scalable GNN training.
| Original language | English |
|---|---|
| Title of host publication | AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with CCS 2024 |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 65-76 |
| Number of pages | 12 |
| ISBN (Electronic) | 9798400712289 |
| DOIs | |
| Publication status | Published - 2024 Nov 22 |
| Externally published | Yes |
| Event | 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024 - Salt Lake City, United States |
| Duration | 2024 Oct 14 → 2024 Oct 18 |
Publication series
| Name | AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024 |
|---|
Conference
| Conference | 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024 |
|---|---|
| Country/Territory | United States |
| City | Salt Lake City |
| Period | 24/10/14 → 24/10/18 |
Bibliographical note
Publisher Copyright: © 2024 Copyright held by the owner/author(s).
Keywords
- Coarsening
- Graph Backdoor
- Graph Neural Network
- Graph Reduction
- Sparsification
- Trustworthy AI
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Networks and Communications
- Software