
Adversarial Camouflage for Node Injection Attack on Graphs

This repository is the PyTorch implementation of our paper:

Adversarial Camouflage for Node Injection Attack on Graphs, published in Information Sciences (IF = 8.233)

By Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Liang Hou, Fei Sun, and Xueqi Cheng

Introduction

In this paper, we find that existing node injection attack methods are prone to failure in practical settings, since defense and detection methods can easily distinguish the injected malicious nodes from the original normal nodes and remove them.

Figure 1 shows the distribution of attributes of injected nodes and original normal nodes for state-of-the-art node injection attack methods, i.e., G-NIA [39] and TDGIA [59], and the heuristic imperceptibility constraint HAO. The attributes of injected nodes (red) look different from those of normal nodes (blue). These defects weaken the effectiveness of such attacks in practical scenarios, where defense and detection methods are commonly deployed.

(Figure 1: motivation)

CANA

We first formulate camouflage on graphs as the distribution similarity between the ego networks centered on the injected nodes and the ego networks centered on the normal nodes, characterizing both network structure and node attributes. We then propose an adversarial camouflage framework for node injection attacks, named CANA, which improves the camouflage of injected nodes through an adversarial paradigm. CANA is a general framework that can be attached to any existing node injection attack method (denoted G), improving node camouflage while inheriting the attack performance of the underlying method.

Further details can be found in our paper.
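The adversarial paradigm above can be sketched as a GAN-style objective: a discriminator D is trained to tell injected ego networks from normal ones, while the attack G is additionally penalized for being distinguishable. The following is a minimal pure-Python sketch under our own assumptions; the scores, loss weights, and function names are illustrative and not the paper's implementation:

```python
import math

def bce(pred, label):
    # binary cross-entropy for a single probability score
    eps = 1e-12
    return -(label * math.log(pred + eps) + (1 - label) * math.log(1 - pred + eps))

def discriminator_loss(d_normal, d_injected):
    # D should score normal ego networks as 1 and injected ones as 0
    loss_real = sum(bce(p, 1.0) for p in d_normal) / len(d_normal)
    loss_fake = sum(bce(p, 0.0) for p in d_injected) / len(d_injected)
    return loss_real + loss_fake

def generator_loss(attack_loss, d_injected, alpha=0.5):
    # the attacker keeps its original attack objective and, weighted by
    # alpha, adds a camouflage term that rewards fooling the discriminator
    camouflage = sum(bce(p, 1.0) for p in d_injected) / len(d_injected)
    return attack_loss + alpha * camouflage
```

Training would alternate several discriminator updates with one attacker update, which is consistent with the --Dopt and --alpha flags in the run scripts below.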

(Figure: the CANA framework)

Results

Extensive experiments demonstrate that CANA can significantly improve the attack performance under defense/detection methods with higher camouflage or imperceptibility.

(Figure: results)

Datasets and splits

Download ogbarxiv, ogbproducts (the subgraph used in our paper), and Reddit (the subgraph used in our paper) from Here.

Unzip datasets_CANA.zip and put the datasets folder in the root directory.

Environment

  • Python >= 3.6
  • pytorch >= 1.6.0
  • scikit-learn >= 0.24.2
  • matplotlib >= 3.3.4
  • pyod >= 1.0.4
  • scipy==1.5.4
  • pandas >= 1.15

Reproduce the results

  • Inject nodes and generate the attacked graphs with CANA

    • Running scripts and parameters for all the datasets are given in PGD+CANA/run.sh, TDGIA+CANA/run.sh, and GNIA+CANA/run.sh

      Example Usage:

      cd GNIA+CANA
      mkdir logs
      nohup python -u run_gnia_cana.py --dataset ogbproducts --suffix cana --alpha 0.5 --beta 0.01 --Dopt 10 --lr_G 1e-3 --lr_D 1e-3 --gpu 0 > logs/ogbproducts_cana.log 2>&1 &
      

      Put the attacked graphs (e.g., GNIA+CANA/new_graphs/ogbproducts_cana.npz) into the directory final_graphs/ogbproducts.

    • Please note that you can also directly download the attacked graphs used in our paper from Here. Unzip final_graphs.zip and put the final_graphs folder in the root directory.

  • Evaluate the attack performance with detection and defense methods

    Running scripts and parameters for all the datasets are given in defense_detection/Detection/run.sh, defense_detection/FLAG/run.sh, and defense_detection/GNNGuard/run.sh

    • Detections

      • Use the attacked graphs downloaded from the above link. Example usage:

        cd defense_detection/Detection
        mkdir logs
        nohup python -u eval_detect.py --suffix final --gpu 0 --dataset ogbproducts > logs/ogbproducts_final.log 2>&1 &
        

        The accuracy can be found in logs/ogbproducts/ogbproducts_final.csv.

      • Use the generated attacked graphs. Example usage:

        cd defense_detection/Detection
        mkdir logs
        nohup python -u eval_detect.py --suffix attacked --gpu 0 --dataset ogbproducts > logs/ogbproducts_attacked.log 2>&1 &
        

        The accuracy can be found in logs/ogbproducts/ogbproducts_attacked.csv.

    • FLAG

      Train the FLAG model and evaluate the attacked graphs with it:

      cd defense_detection/FLAG
      mkdir logs
      CUDA_VISIBLE_DEVICES=0 nohup python -u run_flag.py --dropout 0.3 --perturb_size 0.01 --dataset ogbproducts  --suffix final > logs/ogbproducts.log 2>&1 &
      
    • GNNGuard

      Train the GNNGuard model and evaluate the attacked graphs with it:

      cd defense_detection/GNNGuard
      mkdir logs
      CUDA_VISIBLE_DEVICES=0 nohup python -u run_gnnguard.py --dataset ogbproducts  --dropout 0.3 --suffix final > logs/ogbproducts_final.log 2>&1 &
      
