ALD-GCN: Graph Convolutional Networks With Attribute-Level Defense

Research output: Contribution to journal › Article › peer-review

Abstract

Graph Neural Networks (GNNs), such as the Graph Convolutional Network (GCN), have exhibited impressive performance on various real-world datasets. However, many studies have confirmed that deliberately designed adversarial attacks can easily mislead GNNs in classifying target nodes (targeted attacks) or all nodes (global attacks). According to our observations, different attributes tend to be treated differently when a graph is attacked. Unfortunately, most existing defense methods operate only at the graph or node level, ignoring the diversity of attributes within each node. To address this limitation, we propose to leverage a new property, named Attribute-level Smoothness (ALS), which is defined based on the local differences of the graph. We then propose a novel defense method, named GCN with Attribute-level Defense (ALD-GCN), which utilizes the ALS property to provide attribute-level protection to each attribute. Extensive experiments on real-world graphs demonstrate the superiority of the proposed method and the potential of the ALS property under attacks.
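The abstract defines ALS in terms of local differences of attributes over the graph, but gives no formula. A minimal sketch of one plausible reading is below: for each attribute, measure how much its values differ across connected node pairs (a per-attribute Dirichlet-energy-style score). The function name, the squared-difference form, and the mean aggregation are all assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def attribute_level_smoothness(X, edges):
    """Hypothetical per-attribute local-difference score over graph edges.

    X     : (num_nodes, num_attrs) node attribute matrix.
    edges : list of (src, dst) node-index pairs.

    For each attribute, returns the mean squared difference of its values
    across all edges. Lower scores mean the attribute varies more smoothly
    over the graph; an attacked attribute would be expected to score higher.
    (Assumed formulation -- the paper's ALS definition may differ.)
    """
    X = np.asarray(X, dtype=float)
    src, dst = np.asarray(edges).T        # edge endpoints
    diffs = X[src] - X[dst]               # (num_edges, num_attrs)
    return (diffs ** 2).mean(axis=0)      # one score per attribute

# Toy path graph 0-1-2 with 2 attributes: attribute 0 is constant
# (perfectly smooth), attribute 1 alternates across edges.
X = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 0.0]]
edges = [(0, 1), (1, 2)]
print(attribute_level_smoothness(X, edges))  # attribute 0 scores 0.0
```

Under this reading, a defense could down-weight or sanitize the attributes with anomalously high scores before message passing, which matches the abstract's idea of attribute-level rather than node-level protection.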

Original language: English
Pages (from-to): 788-799
Number of pages: 12
Journal: IEEE Transactions on Big Data
Volume: 11
Issue number: 2
DOIs
State: Published - 2025

Keywords

  • Graph neural networks (GNNs)
  • adversarial attacks
  • robustness

