chaitjo / geometric-gnn-dojo

Geometric GNN Dojo provides unified implementations and experiments to explore the design space of Geometric Graph Neural Networks.

Home Page: https://arxiv.org/abs/2301.09308

License: MIT License

Languages: Jupyter Notebook 58.85%, Python 41.15%
Topics: geometric-deep-learning, graph-neural-networks, equivariance, equivariant-networks, geometric-graphs, graph-isomorphism, pytorch, pytorch-geometric, e3nn

geometric-gnn-dojo's Introduction

⚔️ Geometric GNN Dojo

Geometric GNN Dojo is a pedagogical resource for beginners and experts to explore the design space of Graph Neural Networks for geometric graphs.

Check out the accompanying paper 'On the Expressive Power of Geometric Graph Neural Networks', which studies the expressivity and theoretical limits of geometric GNNs.

Chaitanya K. Joshi*, Cristian Bodnar*, Simon V. Mathis, Taco Cohen, and Pietro Liò. On the Expressive Power of Geometric Graph Neural Networks. International Conference on Machine Learning.

PDF | Slides | Video

New to geometric GNNs? Try our practical notebook, Geometric GNNs 101, prepared for MPhil students at the University of Cambridge.

Open In Colab (recommended!)

Architectures

The /models directory provides unified implementations of several popular geometric GNN architectures: SchNet, DimeNet, SphereNet, E(n) Equivariant GNN (EGNN), GVP-GNN, Tensor Field Network (TFN), and MACE.

Experiments

The /experiments directory contains notebooks with synthetic experiments to highlight practical challenges in building powerful geometric GNNs:

  • kchains.ipynb: Distinguishing k-chains, which test a model's ability to propagate geometric information non-locally and demonstrate oversquashing with increased depth/longer chains.
  • rotsym.ipynb: Rotationally symmetric structures, which test a layer's ability to identify neighbourhood orientation and highlight the utility of higher-order tensors in equivariant GNNs.
  • incompleteness.ipynb: Counterexamples from Pozdnyakov et al., which test a layer's ability to create distinguishing fingerprints for local neighbourhoods and highlight the need for higher body order of local scalarisation (distances, angles, and beyond).
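For intuition, the k-chain construction can be sketched in plain Python. This is an illustrative layout based on the description above, not the exact coordinates or graph construction used in kchains.ipynb: the two chains in a pair share the same backbone and head endpoint, and differ only in the orientation of the tail endpoint, so telling them apart requires propagating orientation information across the whole chain.

```python
def make_k_chain(k, flip_tail=False):
    """Illustrative k-chain: k collinear backbone nodes plus one
    off-axis endpoint at each end. The pair (flip_tail=False/True)
    differs only in which way the tail endpoint bends."""
    backbone = [(float(i), 0.0) for i in range(k)]   # nodes on the x-axis
    head = (-1.0, 1.0)                               # head endpoint, bent up
    tail = (float(k), -1.0 if flip_tail else 1.0)    # tail bent down or up
    return [head] + backbone + [tail]

chain_a = make_k_chain(4)                  # tail bends up
chain_b = make_k_chain(4, flip_tail=True)  # tail bends down
# The chains agree on every node except the last one, so any model whose
# receptive field does not span the chain sees identical local environments.
assert chain_a[:-1] == chain_b[:-1] and chain_a[-1] != chain_b[-1]
```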

Installation

# Create new conda environment
conda create --prefix ./env python=3.8
conda activate ./env

# Install PyTorch (Check CUDA version for GPU!)
#
# Option 1: CPU
conda install pytorch==1.12.0 -c pytorch
#
# Option 2: GPU, CUDA 11.3
# conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

# Install dependencies
conda install matplotlib pandas networkx
conda install jupyterlab -c conda-forge
pip install e3nn==0.4.4 ipdb ase

# Install PyG (Check CPU/GPU/MacOS)
#
# Option 1: CPU, MacOS
pip install torch-scatter torch-sparse torch-cluster torch-spline-conv -f https://data.pyg.org/whl/torch-1.12.0+cpu.html 
pip install torch-geometric
#
# Option 2: GPU, CUDA 11.3
# pip install torch-scatter torch-sparse torch-cluster torch-spline-conv -f https://data.pyg.org/whl/torch-1.12.1+cu113.html
# pip install torch-geometric
#
# Option 3: CPU/GPU, but may not work on MacOS
# conda install pyg -c pyg

Directory Structure and Usage

.
├── README.md
│
├── geometric_gnn_101.ipynb             # A gentle introduction to Geometric GNNs
│
├── experiments                         # Synthetic experiments
│   │
│   ├── kchains.ipynb                   # Experiment on k-chains
│   ├── rotsym.ipynb                    # Experiment on rotationally symmetric structures
│   ├── incompleteness.ipynb            # Experiment on counterexamples from Pozdnyakov et al.
│   └── utils                           # Helper functions for training, plotting, etc.
│
└── models                              # Geometric GNN models library
    │
    ├── schnet.py                       # SchNet model
    ├── dimenet.py                      # DimeNet model
    ├── spherenet.py                    # SphereNet model
    ├── egnn.py                         # E(n) Equivariant GNN model
    ├── gvpgnn.py                       # GVP-GNN model
    ├── tfn.py                          # Tensor Field Network model
    ├── mace.py                         # MACE model
    ├── layers                          # Layers for each model
    └── modules                         # Modules and layers for MACE

Contact

Authors: Chaitanya K. Joshi ([email protected]), Simon V. Mathis ([email protected]). We welcome your questions and feedback via email or GitHub Issues.

Citation

@inproceedings{joshi2023expressive,
  title={On the Expressive Power of Geometric Graph Neural Networks},
  author={Joshi, Chaitanya K. and Bodnar, Cristian and Mathis, Simon V. and Cohen, Taco and Liò, Pietro},
  booktitle={International Conference on Machine Learning},
  year={2023},
}

geometric-gnn-dojo's People

Contributors: chaitjo, croydon-brixton, eltociear

geometric-gnn-dojo's Issues

Geometric GNNs 101

Hello! I'm a beginner in equivariant deep learning. Could you provide answers for Geometric GNNs 101? Some of the blanks still seem a little hard. Thank you!

Ask for clarification on comments in the aggregate function

First of all, thank you for this wonderful resource. Could you please clarify the following part of the comment in the aggregate function?

inputs: (e, d) - messages `m_ij` from destination to source nodes

If I understood the logic correctly, I would expect the defined messages to go from source nodes (j) to destination nodes (i). Could you elaborate on this point?

Here is the code of the aggregate function:

def aggregate(self, inputs, index):
    """Step (2) Aggregate

    The `aggregate` function aggregates the messages from neighboring nodes,
    according to the chosen aggregation function ('sum' by default).

    Args:
        inputs: (e, d) - messages `m_ij` from destination to source nodes
        index: (e, 1) - list of source nodes for each edge/message in `input`

    Returns:
        aggr_out: (n, d) - aggregated messages `m_i`
    """
    return scatter(inputs, index, dim=self.node_dim, reduce=self.aggr)

Thank you!
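For readers puzzling over the same docstring, the behaviour of the scatter call itself is easy to pin down with a minimal pure-Python stand-in for scatter(inputs, index, reduce='sum') (illustrative only, not the torch_scatter implementation): each message is summed into the output slot named by index, whichever direction convention the comment uses.

```python
def scatter_sum(inputs, index, num_nodes):
    """Minimal pure-Python stand-in for a scatter-sum: message e
    (a feature list) is added into output slot index[e]."""
    d = len(inputs[0])
    out = [[0.0] * d for _ in range(num_nodes)]
    for feats, i in zip(inputs, index):
        for k, v in enumerate(feats):
            out[i][k] += v
    return out

# Three messages on edges (j -> i): two arrive at node 0, one at node 1
messages = [[1.0, 2.0], [10.0, 20.0], [5.0, 5.0]]
dest = [0, 0, 1]
scatter_sum(messages, dest, num_nodes=2)
# -> [[11.0, 22.0], [5.0, 5.0]]
```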

RuntimeError: Not compiled with CUDA support in DimeNet.

Hi!
When I run DimeNet in the k-chains experiment, I get the following error: RuntimeError: Not compiled with CUDA support.
Also, in line 79 you access self.triplet, but there is no such variable.
I would appreciate your support.
Thanks!

Why equivariant and not invariant GNN layers?

Thank you so much for the tutorial. As someone not a part of Cambridge, this tutorial is extremely useful to get started with this domain.

I had a question regarding one of the statements in the notebook,

why did we build permutation invariant GNN models composed of permutation equivariant GNN layers?
The answer is that permutation equivariant GNN layers enable the model to better leverage the relational structure of the underlying nodes, as well as construct more powerful node representations by stacking several layers of these permutation equivariant operations. (You can try running a DeepSets model for QM9 yourself and see the performance reduce.)

I was curious why the relational structure is important, since the ordering of the nodes does not carry any meaning, right? I might be mistaken. In any case, I would be glad if you could clarify this or point me to a useful resource.
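Not part of the notebook, but the quoted claim can be made concrete with a toy sum-aggregation layer (hypothetical names, plain Python): relabelling the nodes permutes the layer's output in exactly the same way (permutation equivariance), while a sum readout over nodes is unchanged (permutation invariance), so the layer exploits the graph's relational structure without attaching meaning to any particular node ordering.

```python
def gnn_layer(feats, neighbors):
    """Toy permutation-equivariant layer: each node's new feature is
    its own feature plus the sum of its neighbours' features."""
    return [feats[i] + sum(feats[j] for j in neighbors[i])
            for i in range(len(feats))]

def readout(feats):
    """Permutation-invariant graph-level readout: sum over nodes."""
    return sum(feats)

feats = [1.0, 2.0, 3.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # a 3-node path graph

# Relabel the nodes: new node i is old node perm[i]
perm = [2, 1, 0]
feats_p = [feats[p] for p in perm]
neighbors_p = {i: [perm.index(j) for j in neighbors[p]]
               for i, p in enumerate(perm)}

out = gnn_layer(feats, neighbors)
out_p = gnn_layer(feats_p, neighbors_p)
assert out_p == [out[p] for p in perm]  # equivariance: output permutes too
assert readout(out) == readout(out_p)   # invariance: readout is unchanged
```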

Something Wrong for MACE in Four-body chiral counterexample

Choosing "mace": partial(MACEModel, correlation=correlation, hidden_irreps=e3nn.o3.Irreps(f'32x0e + 32x0o + 32x1e + 32x1o + 32x2e + 32x2o')) in the four-body chiral counterexample returns the error: local variable 'last_ir' referenced before assignment.
Looking at './models/mace_modules/cg.py', it seems the program enters that branch if the first ir in wigners is not wigners[0][0]?
The environment is:

  • PyTorch version 2.2.0+cu121
  • PyG version 2.4.0
  • e3nn version 0.5.1
  • Using device: cuda

Kernel crash on remote server

Thanks for creating a wonderful notebook.

After installing and cloning the repo on a remote server (following your instructions), I tried running the notebook. But I find that the kernel crashes at the cell where I load the QM9 dataset.

Any idea why this may be the case? Perhaps the dataset is too large for my server to handle, or am I missing something else?

Regards,
Madhav
