Abstract
Have you ever encountered advertising accounts on social networks or distributed denial-of-service (DDoS) attacks? Such attacks appear as intrusions in a network. Recently, several studies have demonstrated the vulnerability of graph convolutional networks (GCNs): given an abnormal graph perturbed from a normal graph, the performance of GCNs drops significantly. To solve this problem, we propose a causal attention graph convolutional network (CAGCN). We design a causal graph in which the given data are affected by an attack and use the causal mechanism on GCNs to cut off this bias. Specifically, we employ two types of attention, node attention (NoA) and neighbor attention (NeA), and demonstrate the robustness of our model, whose performance does not degrade significantly as the attack becomes stronger. In addition, to show that the causal mechanism works well for robust learning, we apply it to a previous study and compare the results.
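The abstract does not spell out the CAGCN equations, so the sketch below (PyTorch; all module and variable names are hypothetical) only illustrates one plausible way a GCN layer could combine a per-node attention score (NoA) with a per-neighbor attention score (NeA) when aggregating messages. It is a minimal sketch under those assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the exact CAGCN architecture is not given in the abstract.
# This layer shows one plausible combination of node attention (NoA) and neighbor
# attention (NeA) inside a GCN-style aggregation; all names here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.node_gate = nn.Linear(in_dim, 1)       # NoA: per-node importance (assumed form)
        self.neigh_gate = nn.Linear(2 * in_dim, 1)  # NeA: per-edge importance (assumed form)

    def forward(self, x, adj):
        # x:   (N, in_dim) node features
        # adj: (N, N) dense adjacency matrix (1 where an edge exists)
        n = x.size(0)

        # Node attention: down-weight nodes that look unreliable.
        node_score = torch.sigmoid(self.node_gate(x))                          # (N, 1)

        # Neighbor attention: score each (i, j) pair from concatenated features.
        xi = x.unsqueeze(1).expand(n, n, -1)                                    # (N, N, in_dim)
        xj = x.unsqueeze(0).expand(n, n, -1)                                    # (N, N, in_dim)
        edge_score = self.neigh_gate(torch.cat([xi, xj], dim=-1)).squeeze(-1)   # (N, N)
        edge_score = edge_score.masked_fill(adj == 0, float("-inf"))
        alpha = F.softmax(edge_score, dim=-1)                                   # normalize over neighbors
        alpha = torch.nan_to_num(alpha)                                         # isolated nodes -> 0

        # Aggregate neighbor messages weighted by both attention scores.
        h = alpha @ (node_score * x)                                            # (N, in_dim)
        return F.relu(self.linear(h))


if __name__ == "__main__":
    x = torch.randn(5, 8)                    # 5 nodes with 8-dim features (toy data)
    adj = (torch.rand(5, 5) > 0.5).float()   # random toy adjacency matrix
    layer = AttentiveGCNLayer(8, 16)
    print(layer(x, adj).shape)               # torch.Size([5, 16])
```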
Original language | English |
---|---|
Article number | 126187 |
Journal | Neurocomputing |
Volume | 538 |
DOIs | |
Publication status | Published - 2023 Jun 14 |
Bibliographical note
Funding Information: This research was supported by the National Research Foundation of Korea (NRF-2019R1F1A1060250), the Korea TechnoComplex Foundation Grant (R2112651), the Korea University Grant (K2107521, K2202151), and Brain Korea 21 FOUR.
Publisher Copyright:
© 2023 Elsevier B.V.
Keywords
- Causal intervention
- Defense against adversarial attacks
- Graph convolutional networks
- Robust learning
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence