
dgi's People

Contributors

petarv-

dgi's Issues

Corruption function only on node features X not graph structure A?

Hello, the paper says: "an explicit (stochastic) corruption function (\tilde{X}, \tilde{A}) = C(X, A)."
However, in the code I can only find corruption applied to the node features:
# Shuffle the feature matrix rows across nodes to build the negative (corrupted) input
idx = np.random.permutation(nb_nodes)
shuf_fts = features[:, idx, :]

I cannot find any corruption of the graph structure A. Why is that? Does this not affect the final result?
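For context, the sketch below shows what a corruption that also perturbs A could look like. It is purely illustrative: the drop_prob parameter and the edge-dropping step are my own assumptions, not something taken from the released code, which (as quoted above) only shuffles X.

import numpy as np

def corrupt(features, adj, drop_prob=0.1):
    # Illustrative joint corruption C(X, A): shuffle node features and randomly drop edges.
    # features: (1, nb_nodes, ft_size) array; adj: dense (nb_nodes, nb_nodes) array.
    nb_nodes = features.shape[1]
    idx = np.random.permutation(nb_nodes)
    shuf_fts = features[:, idx, :]                    # corrupted X: row-shuffled features
    keep_mask = np.random.rand(*adj.shape) > drop_prob
    corrupt_adj = adj * keep_mask                     # corrupted A: randomly dropped edges
    return shuf_fts, corrupt_adj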

thanks

Out of Memory on Pubmed Dataset

I tried to run the released execute.py on Pubmed. However, it seems to require 19.25 GB of memory during backpropagation.

Is this the expected behavior? Is there any way to work around this and reproduce the number reported in the paper?
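One generic workaround, offered only as a sketch and not something taken from this repository, is to train on sampled subgraphs so that the full nb_nodes x nb_nodes tensors are never materialized:

import numpy as np

def sample_subgraph(features, adj, sample_size=2000):
    # Illustrative node subsampling to reduce memory: keep a random subset
    # of nodes and the induced subgraph. Not part of the released code.
    # features: (nb_nodes, ft_size) array; adj: dense (nb_nodes, nb_nodes) array.
    nb_nodes = features.shape[0]
    keep = np.random.choice(nb_nodes, size=sample_size, replace=False)
    return features[keep], adj[np.ix_(keep, keep)]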

Help!

I saw that the DGI implementation on your GitHub only contains the Cora dataset.

I was wondering whether you could kindly share your DGI implementation for the Reddit and PPI datasets!

Thank you very much!

The time cost on Reddit

Hi Petar,
Can you provide the training time on the Reddit and PPI datasets?
I am currently working on scaling up graph autoencoders, and I wonder how efficient this method is.

Thanks

Mean accuracy on fixed representation

Hi, great work! But I have a question regarding how to repeat the experiments.
When computing the mean accuracy, you used the fixed representations and only repeated the logistic regression part. Should we instead repeat the whole pipeline, i.e. pretext training plus the downstream task, each time?
Or is there a reference explaining the reason for doing it this way?
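For concreteness, my understanding of the protocol is sketched below with scikit-learn. The embeds and y arrays are hypothetical stand-ins for the frozen DGI embeddings and node labels, and the split and classifier details are illustrative rather than the paper's exact setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins: in practice embeds would come from an encoder trained only once.
embeds = np.random.rand(2708, 128)
y = np.random.randint(0, 7, size=2708)

accs = []
for seed in range(10):
    # Only the downstream classifier is re-run; the representation stays fixed.
    X_tr, X_te, y_tr, y_te = train_test_split(embeds, y, test_size=0.2, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accs.append(clf.score(X_te, y_te))
print(np.mean(accs), np.std(accs))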
Many thanks

Question about PPI dataset

Hi,

Thank you for making the code available. I would like to ask about a remark you made in your paper regarding the PPI dataset. On page 8, in the paragraph "Inductive learning on multiple graphs", you noted that 40% of the nodes have all-zero feature vectors. However, the feature matrix loaded from GraphSAGE (http://snap.stanford.edu/graphsage/) appears to be dense. Did you use a different set of feature vectors or a different PPI dataset?

Thank you for your time! If I misunderstood something, please kindly point out my mistake.

Edit: Sorry, I made a mistake when checking the feature vectors. Indeed, 42% of them are all zeros.
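For anyone double-checking this, here is a quick way to count all-zero feature rows. The file name ppi-feats.npy is an assumption based on the GraphSAGE download format; adjust the path to your copy.

import numpy as np

feats = np.load('ppi-feats.npy')   # assumed (num_nodes, num_features) array
zero_rows = ~feats.any(axis=1)     # True where an entire feature row is zero
print(zero_rows.mean())            # fraction of all-zero nodes (reported ~0.42)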

DATA

What is the format of your dataset (Cora)?
How can I put my own graph into this format?
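A minimal sketch of the two core inputs such a dataset boils down to, a (sparse) adjacency matrix and a node feature matrix, assuming you start from an edge list. All variable names here are made up for illustration, and real Cora-style loaders additionally provide labels and train/val/test splits.

import numpy as np
import scipy.sparse as sp

# Hypothetical raw inputs for your own graph.
edges = np.array([[0, 1], [1, 2], [2, 0]])   # (num_edges, 2) undirected edge list
features = np.random.rand(3, 16)             # (num_nodes, num_features)

num_nodes = features.shape[0]
adj = sp.coo_matrix(
    (np.ones(len(edges)), (edges[:, 0], edges[:, 1])),
    shape=(num_nodes, num_nodes),
)
adj = adj + adj.T                            # symmetrize for an undirected graph
adj = (adj > 0).astype(np.float32)           # binary sparse adjacency

# adj and features are then fed to the model in place of the Cora arrays.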
Thank you very much.

Codes on reddit dataset

Great work! I really enjoy reading the paper.

However, will the code to replicate the reported performance on Reddit be released? If so, is there a planned timeline?

Thank you!

Why do we need to calculate the expectation before the sum?

Hello, I've read your wonderful paper published at ICLR, and I'd like to ask a question.
The two summation symbols in the objective function sum over the positive and negative samples and then take the average. Why do we also need to take the expectation before the sum?
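For reference, the objective as I read it in the paper (Eq. 1) is

\mathcal{L} = \frac{1}{N + M} \left( \sum_{i=1}^{N} \mathbb{E}_{(\mathbf{X}, \mathbf{A})} \left[ \log \mathcal{D}\left( \vec{h}_i, \vec{s} \right) \right] + \sum_{j=1}^{M} \mathbb{E}_{(\tilde{\mathbf{X}}, \tilde{\mathbf{A}})} \left[ \log \left( 1 - \mathcal{D}\left( \tilde{\vec{h}}_j, \vec{s} \right) \right) \right] \right)

where, as I understand it, the sums run over the N positive and M negative patch representations, while the expectations are over the (stochastic) input and corruption distributions.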
   thank you!

About the meaning of learned features

Hi, I was wondering whether the learned representations tend to preserve their unique information or the common information. Maximizing mutual information between the patch and summary vectors means finding more shared information between them, but the discriminator wants to distinguish the samples, so I am confused.

Error in AvgReadout

Hi, I think there's an error in AvgReadout when a mask is used. The mask summation should be performed along the node dimension (dim 1) only.

It is currently

return torch.sum(seq * msk, 1) / torch.sum(msk)

but should be

return torch.sum(seq * msk, 1) / torch.sum(msk, 1)  # note the dimension here
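For clarity, a minimal masked mean-readout module implementing the proposed fix might look as follows; this is a sketch, not the repository's exact class.

import torch
import torch.nn as nn

class AvgReadout(nn.Module):
    # Masked mean over the node dimension.
    # seq: (batch, nb_nodes, ft_size); msk: (batch, nb_nodes, 1) or None.
    def forward(self, seq, msk=None):
        if msk is None:
            return torch.mean(seq, 1)
        # Divide each graph's sum by its own (unmasked) node count.
        return torch.sum(seq * msk, 1) / torch.sum(msk, 1)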

sigmoid function is missing in layers/discriminator.py

It seems that the sigmoid function is missing in layers/discriminator.py (line 30).
As explained in your paper, the logistic sigmoid nonlinearity is used to convert scores into probabilities of (h_i, s) being a positive example.
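If explicit probabilities are needed, the sigmoid can be applied on top of the raw scores; note, however, that when training uses torch.nn.BCEWithLogitsLoss (I have not checked which loss this repository uses), the sigmoid is folded into the loss and the discriminator can safely return logits. A small illustration with made-up tensors:

import torch

# Hypothetical raw discriminator scores (logits) and pair labels.
logits = torch.randn(1, 8)                                 # scores D(h_i, s) for 8 pairs
labels = torch.cat([torch.ones(1, 4), torch.zeros(1, 4)], dim=1)

probs = torch.sigmoid(logits)                              # explicit probability of a positive pair
loss = torch.nn.BCEWithLogitsLoss()(logits, labels)        # sigmoid applied implicitly here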
