petarv- / dgi
Deep Graph Infomax (https://arxiv.org/abs/1809.10341)
License: MIT License
Hi,
Thank you for making the code available. I would like to ask about a remark you made in your paper about the PPI dataset. On page 8, in the paragraph "Inductive learning on multiple graphs", you noted that 40% of the nodes have all-zero feature vectors. However, the feature vectors loaded using GraphSAGE (http://snap.stanford.edu/graphsage/) are dense. Did you use a different set of feature vectors or a different PPI dataset?
Thank you for your time! If I misunderstood something, please kindly point out my mistake.
Edit: Sorry, I made a mistake when I checked the feature vectors. It was indeed 42% all zeros.
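For anyone who wants to reproduce this check, here is a minimal sketch (my own, assuming the features are loaded as a 2-D NumPy array of shape `(num_nodes, num_features)`, e.g. via the GraphSAGE loaders):

```python
import numpy as np

def all_zero_fraction(features: np.ndarray) -> float:
    """Fraction of nodes whose entire feature vector is zero."""
    zero_rows = ~features.any(axis=1)  # True where a row is all zeros
    return float(zero_rows.mean())

# toy example: 2 of 5 nodes have all-zero features
feats = np.array([[0., 0.], [1., 0.], [0., 0.], [0., 2.], [3., 1.]])
print(all_zero_fraction(feats))  # 0.4
```

Running this on the loaded PPI feature matrix should reveal whether the ~40% all-zero rows are present.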
Hi, this seems to be a brief Cora experiment, but the data is quite different from the original Cora dataset. Could anybody explain what is meant by this? Thanks a lot!
What is the format of your dataset (Cora)? How can I convert my own graph into this format? Thank you very much.
Hi, I think there's an error in AvgReadout when a mask is used: the mask summation should be performed along the first dimension only.
Line 15 in 0afce4e
It is
return torch.sum(seq * msk, 1) / torch.sum(msk)
but should be
return torch.sum(seq * msk, 1) / torch.sum(msk, 1) # Note the dimension here.
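To make the fix concrete, here is a self-contained sketch of a masked average readout with per-graph mask normalization (my own rewrite for illustration, not the repository's exact module):

```python
import torch
import torch.nn as nn

class AvgReadout(nn.Module):
    """Mean-pool node embeddings into a graph summary, optionally masked."""
    def forward(self, seq, msk=None):
        # seq: (batch, nodes, features); msk: (batch, nodes, 1) of 0/1
        if msk is None:
            return torch.mean(seq, 1)
        # Normalize by the number of unmasked nodes in EACH graph,
        # i.e. sum the mask along dim 1, not over the whole batch.
        return torch.sum(seq * msk, 1) / torch.sum(msk, 1)

# usage: two graphs; the second has only one valid node
seq = torch.tensor([[[1.0], [3.0]], [[2.0], [10.0]]])
msk = torch.tensor([[[1.0], [1.0]], [[1.0], [0.0]]])
print(AvgReadout()(seq, msk))  # tensor([[2.], [2.]])
```

With the original `torch.sum(msk)` the second graph would be divided by 3 (the total mask count across the batch) instead of 1, skewing its summary.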
I tried to run the released execute.py
on PubMed. However, it seems to take 19.25 GB of memory during backpropagation.
Is this the expected behavior? Is there any way to work around this and replicate the number reported in the paper?
Hi Petar,
Can you provide the time cost on the Reddit and PPI datasets?
I am currently working on scaling up Graph Auto-Encoders, and I wonder how efficient this method is.
Thanks
It seems that the sigmoid function is missing in layers/discriminator.py (line 30).
As explained in your paper, the logistic sigmoid nonlinearity is used to convert scores into probabilities of (h_i, s) being a positive example.
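One common reason for an apparently "missing" sigmoid is that the loss folds it in: if the discriminator returns raw bilinear scores (logits), `nn.BCEWithLogitsLoss` applies the sigmoid internally in a numerically stable way. A hedged sketch of this pattern (my own illustration, not the repository's exact module):

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Bilinear discriminator score h^T W s, returned as raw logits."""
    def __init__(self, n_hidden):
        super().__init__()
        self.bilinear = nn.Bilinear(n_hidden, n_hidden, 1)

    def forward(self, summary, h_pos, h_neg):
        s = summary.expand_as(h_pos)      # broadcast graph summary to each node
        sc_pos = self.bilinear(h_pos, s)  # logits for positive (real) pairs
        sc_neg = self.bilinear(h_neg, s)  # logits for negative (corrupted) pairs
        return torch.cat((sc_pos, sc_neg), 0)

disc = Discriminator(4)
h_pos, h_neg = torch.randn(8, 4), torch.randn(8, 4)
summary = torch.sigmoid(torch.randn(1, 4))
logits = disc(summary, h_pos, h_neg)
labels = torch.cat((torch.ones(8, 1), torch.zeros(8, 1)), 0)
# BCEWithLogitsLoss = sigmoid + binary cross-entropy in one stable op,
# so no explicit sigmoid is needed inside the discriminator itself.
loss = nn.BCEWithLogitsLoss()(logits, labels)
```

If the training loop uses a plain `BCELoss` instead, the sigmoid would indeed have to be applied explicitly to the scores.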
Hello, I've read your wonderful paper published at ICLR, and I'd like to ask you some questions.
The two summation symbols in the objective function sum over the positive and negative samples and take the average. Why do you need to take the expectation before summing?
thank you!
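For reference, the objective in question (Eq. (1) of the paper) reads, as I understand it, with N positive and M negative samples:

```latex
\mathcal{L} = \frac{1}{N+M}\left(
  \sum_{i=1}^{N} \mathbb{E}_{(\mathbf{X},\mathbf{A})}
    \Big[\log \mathcal{D}\big(\vec{h}_i, \vec{s}\big)\Big]
  + \sum_{j=1}^{M} \mathbb{E}_{(\widetilde{\mathbf{X}},\widetilde{\mathbf{A}})}
    \Big[\log\Big(1 - \mathcal{D}\big(\vec{\tilde{h}}_j, \vec{s}\big)\Big)\Big]
\right)
```

The expectations are over the data and corruption distributions; in the single-graph implementation each expectation is estimated from the sampled patch representations, so in practice the inner expectation and the outer sum collapse into a single average over the N + M sampled pairs.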
Hi,
Can you please share your code for training and evaluating on the PPI dataset as well?
Thanks
Hello, how did you create the dataset under the data folder?
Great paper!
I have a question about the objective function: is it the same as cross-entropy?
Looking forward to your reply!
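To illustrate the connection: the DGI objective is binary cross-entropy where real (patch, summary) pairs are labeled 1 and corrupted pairs are labeled 0. A small sketch with hypothetical discriminator scores (the logit values below are made up for illustration):

```python
import torch
import torch.nn as nn

# hypothetical discriminator logits for 4 positive and 4 negative pairs
logits = torch.tensor([2.0, 1.5, 0.3, -0.2, -1.0, 0.4, -2.1, -0.5]).unsqueeze(1)
labels = torch.cat((torch.ones(4, 1), torch.zeros(4, 1)), 0)

# the loss used in practice: sigmoid + binary cross-entropy
bce = nn.BCEWithLogitsLoss()(logits, labels)

# the same quantity written out: -mean( y*log p + (1-y)*log(1-p) )
p = torch.sigmoid(logits)
manual = -(labels * p.log() + (1 - labels) * (1 - p).log()).mean()
print(torch.allclose(bce, manual))  # True
```

Up to the sign (maximizing the objective vs. minimizing the loss), the two are the same quantity.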
I saw that the DGI implementation on your GitHub only contains the Cora dataset.
I was wondering whether you could kindly share the DGI implementation for the Reddit and PPI datasets as well!
Thank you very much!
Hi, great work! But I have a question regarding how to repeat the experiments.
When computing the mean accuracy, you used the fixed representations and only repeated the logistic regression part. Should we instead repeat the whole pipeline of pretext training plus downstream task?
Or is there a reference explaining the reason for doing it this way?
Many thanks
Hi Petar,
I have read your idea about Deep Graph Infomax.
Right now something really puzzles me: if I want to add features to my edges while still doing unsupervised or self-supervised learning, can I simply change the encoder and keep all the other functions, such as the corruption and readout functions, unchanged?
Great work! I really enjoy reading the paper.
However, will the code to replicate the reported performance on Reddit be released? If so, is there a planned schedule?
Thank you!
Hello, the paper said: "an explicit (stochastic) corruption function (\tilde{X}, \tilde{A}) = C(X, A)."
However, in the code, I only find the corruption of the node attributes:
idx = np.random.permutation(nb_nodes)
shuf_fts = features[:, idx, :]
I cannot find any corruption of the graph structure. Why is that? Does it not affect the final result?
thanks
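For clarity, the corruption in the released code shuffles the rows of X while reusing A unchanged, so each node is paired with another node's features. A self-contained 2-D sketch of that step (the repository operates on a batched `(1, nodes, features)` tensor, but the idea is the same):

```python
import numpy as np

def corrupt(features: np.ndarray, adj: np.ndarray):
    """Row-wise shuffle of node features; the graph structure is left intact.

    Each node receives another node's feature vector, so patch
    representations no longer match the summary, while the adjacency
    matrix A is reused as-is.
    """
    idx = np.random.permutation(features.shape[0])
    return features[idx], adj  # adjacency is NOT corrupted

feats = np.arange(12, dtype=float).reshape(4, 3)
adj = np.eye(4)
shuf_fts, same_adj = corrupt(feats, adj)
# same multiset of feature rows, identical adjacency
```

Whether additionally corrupting A helps is an empirical question; the paper's reported results use this feature-shuffling corruption.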
Hi, I was wondering whether the learned representations tend to preserve their unique information or their common information. Maximizing the mutual information between patch and summary vectors is meant to find shared information between them, but the discriminator wants to distinguish the samples. So I am confused.