tech-srl / how_attentive_are_gats
Code for the paper "How Attentive are Graph Attention Networks?" (ICLR'2022)
Thanks for your great contribution!!
I'm confused about Figure 1 (a) in your paper. Which layer of GAT does this attention matrix come from? Is the attention matrix the same across all layers? And do the different heads within one layer produce attention matrices like this one?
Best regards
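For context, here is a minimal sketch (not the paper's code, names chosen for illustration) of the "static attention" property that Figure 1 (a) visualizes: in GAT-style scoring, e(h_i, h_j) = LeakyReLU(a_l · W h_i + a_r · W h_j), the key-dependent term a_r · W h_j does not depend on the query i, so every query node ranks the key nodes identically.

```python
# Hedged sketch of GAT-style (static) attention scoring.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
H = rng.normal(size=(n, d))   # node features
W = rng.normal(size=(d, d))   # shared projection
a_l = rng.normal(size=d)      # "query" half of the attention vector
a_r = rng.normal(size=d)      # "key" half of the attention vector

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

q = (H @ W) @ a_l             # per-query term, shape (n,)
k = (H @ W) @ a_r             # per-key term, shape (n,)
E = leaky_relu(q[:, None] + k[None, :])  # (n, n) score matrix

# Because LeakyReLU is monotonic, every row attains its maximum
# at the same key column: the ranking is identical for all queries.
print(E.argmax(axis=1))
```

In a real model each layer and each head has its own W and a, so the matrices differ numerically, but under this scoring scheme they all share the same per-row ranking pattern.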
Sorry for bothering again.
This might be a dumb question, but I am reading the DGL implementation to understand it, and I really don't get L272.
how_attentive_are_gats/gatv2_conv_DGL.py
Lines 266 to 272 in b5ccb61
The only thing that makes sense to me is that the right-hand side was supposed to be feat_dst as well, and this is a bug. Otherwise it does not make sense to set share_weights=False when graph.is_block is True. Am I missing something?
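For readers following along, a minimal sketch (not DGL's actual code; all names here are illustrative) of what share_weights controls in GATv2-style scoring, e(h_i, h_j) = a · LeakyReLU(W_l h_i + W_r h_j): with share_weights=True the source and destination features go through one projection (W_r tied to W_l), with share_weights=False through two.

```python
# Hedged sketch of GATv2-style scoring with shared vs. separate projections.
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 6
H = rng.normal(size=(n, d))
W_l = rng.normal(size=(d, d))
W_r = rng.normal(size=(d, d))
a = rng.normal(size=d)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gatv2_scores(H_dst, H_src, W_l, W_r, a):
    # E[i, j]: score of destination node i attending to source node j
    z = leaky_relu((H_dst @ W_l)[:, None, :] + (H_src @ W_r)[None, :, :])
    return z @ a  # shape (n_dst, n_src)

E_shared = gatv2_scores(H, H, W_l, W_l, a)    # like share_weights=True
E_separate = gatv2_scores(H, H, W_l, W_r, a)  # like share_weights=False
print(E_shared.shape, E_separate.shape)
```

The subtlety the question points at is which feature tensor (source vs. destination) feeds which projection when the graph is a sampled block; the sketch only illustrates the two-projection structure, not DGL's block handling.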
Thank you for your great contribution!
Currently we are doing research with GAT, GATv2 (and Graph Transformers) and are trying to reproduce your results on the ogbn datasets.
Do you already know when you will publish the rest of the code?
Best regards
Hi!
I was trying to reproduce the results on the proteins dataset on a V100 GPU, and I am running into a few problems. First, I had some issues with the BatchSampler (samplers cannot be passed to iterable datasets), so I just removed it (I am using the latest version of DGL, since 0.6 is not available for my CUDA version).
After fixing that, I came across this error message
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])
in line 98 of models.py
how_attentive_are_gats/proteins/models.py
Lines 90 to 100 in b5ccb61
Is this error related to removing the BatchSampler? And what tensor size is expected at that line?
Thanks for the help
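For context, a minimal sketch (not PyTorch's implementation; the function name is illustrative) of why training-mode BatchNorm rejects a batch of one: it normalizes each channel by the batch statistics, and a single sample gives no usable batch variance, so PyTorch raises exactly the "Expected more than 1 value per channel" error instead of dividing by a degenerate estimate. A size torch.Size([1, 64]) therefore suggests a batch that collapsed to one row after removing the sampler.

```python
# Hedged sketch of training-mode batch normalization over (batch, channels).
import numpy as np

def batchnorm_train(x, eps=1e-5):
    # x: (batch, channels); training mode uses statistics of the current batch
    if x.shape[0] <= 1:
        # mirrors PyTorch's guard: one sample has no meaningful batch variance
        raise ValueError("Expected more than 1 value per channel when training")
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(2)
out = batchnorm_train(rng.normal(size=(32, 64)))  # fine: 32 rows in the batch

try:
    batchnorm_train(rng.normal(size=(1, 64)))     # a size-1 batch triggers the guard
except ValueError as err:
    print("raised:", err)
```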
In the original paper, GAT outperforms GraphSAGE on PPI. Why didn't you compare GATv2 on PPI?
Also, this repo does not include a script to reproduce Figure 1 of the paper.
Hi, I am using the DGL-based code of the project. When I set num_heads = 2 to initialize the GATv2Conv layers, I receive the following error:
dgl._ffi.base.DGLError: Expect number of features to match number of nodes (len(u)). Got 568 and 284 instead
I guess the attention heads are not being aggregated? I noticed the DGL version does not have an is_concat option like the annotated code demo.
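For context, a minimal sketch (not DGL's API; the shapes are illustrative) of the head-aggregation step: a layer with H heads yields a per-node output of shape (N, H, D), and downstream code expects either the heads concatenated to (N, H*D) or averaged to (N, D). Leaving the heads unaggregated or flattening along the wrong axis can surface as a feature-count vs. node-count mismatch (note 568 = 2 x 284 in the error above).

```python
# Hedged sketch of concatenating vs. averaging multi-head outputs.
import numpy as np

N, H, D = 284, 2, 8                # nodes, heads, per-head output dim
multi_head = np.zeros((N, H, D))   # raw multi-head layer output

concat = multi_head.reshape(N, H * D)  # concatenate heads (common for hidden layers)
mean = multi_head.mean(axis=1)         # average heads (common for the output layer)
print(concat.shape, mean.shape)
```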