
Comments (5)

RexYing commented on August 20, 2024

Does that mean you obtain a higher F1 than what we reported in the paper? We only got ~0.6.
I'm not too sure what's going on. Potentially due to a different train/test split?
We use Micro F1.

But that said, some follow-up papers (FastGCN, GAT, etc.) seem to get significantly higher results on PPI (0.9).


michibertoldi commented on August 20, 2024

> Hi, I downloaded your code and tried the unsupervised training. When I run ppi_eval.py strictly following the instructions, it outputs F1 scores far different from the ones shown in your paper. For example, some of them look like these:
> F1 score 0.6524257784214338
> F1 score 0.7677407675597393
> F1 score 0.7941708906589428
> F1 score 0.7668356263577119
> F1 score 0.8696596669080376
> F1 score 0.8081100651701665
> F1 score 0.7624909485879797
> The only thing I changed in the code is that I replaced dict.iteritems() with dict.items(), which I don't think should be the real problem. I wonder if there is something wrong? Are the scores "Micro F1" or "Macro F1" in your paper?
>
> Default parameters + mean aggregator + unsupervised training on the PPI dataset (not the toy dataset).

How did you succeed in running the unsupervised training? When I try, I get "Killed" after "Done loading training data". Thanks in advance.


shuaishuaij commented on August 20, 2024

> How did you succeed in running the unsupervised training? When I try, I get "Killed" after "Done loading training data".

Any more error messages? I think the "Killed" problem may not come from the model itself. Did you change any hyper-parameters?
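A quick note on the "Killed" message: on Linux, an abrupt "Killed" with no Python traceback usually means the kernel's out-of-memory killer terminated the process, which can happen while the full PPI graph and random walks are loaded into memory. A minimal sketch for checking whether memory is the bottleneck, using the third-party psutil package (not part of this repo; the helper name is made up for illustration):

```python
import psutil  # third-party: pip install psutil

def report_memory(tag=""):
    """Print the current process RSS and the remaining system memory.

    Calling this right before and after the data-loading step (e.g. around
    the "Done loading training data" message) shows whether the process is
    approaching the machine's physical memory limit."""
    rss_gb = psutil.Process().memory_info().rss / 1e9
    avail_gb = psutil.virtual_memory().available / 1e9
    print(f"[{tag}] process RSS: {rss_gb:.2f} GB, system available: {avail_gb:.2f} GB")
```

If available memory is close to zero just before the crash, the fix is more RAM (or a smaller dataset) rather than a model change.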


Haicang commented on August 20, 2024

> Are the scores "Micro F1" or "Macro F1" in your paper?

From the history of that file, I found that they used one line of code to compute the micro-F1 score. Compared with implementations of other graph embedding methods, I think the previous code is what people use to get this metric.

But I'm not sure how I should compute the F1 score for PPI, which has multi-label outputs.
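For reference, a minimal sketch of how micro- and macro-F1 can be computed for multi-label output with scikit-learn; the arrays here are random placeholders standing in for the PPI test labels and predictions, not code from ppi_eval.py:

```python
import numpy as np
from sklearn.metrics import f1_score

# Placeholder multi-label indicator matrices: one row per node, one column
# per label (PPI has 121 binary labels per node).
rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=(100, 121))
y_pred = rng.randint(0, 2, size=(100, 121))

# Micro-F1: pool true positives / false positives / false negatives over all
# nodes and all labels, then compute a single F1 from the pooled counts.
micro = f1_score(y_true, y_pred, average="micro")

# Macro-F1: compute F1 per label, then take the unweighted mean.
macro = f1_score(y_true, y_pred, average="macro")

print("micro-F1 = %.4f, macro-F1 = %.4f" % (micro, macro))
```

Since RexYing says the paper uses Micro F1, `average="micro"` would be the matching setting.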


zeou1 commented on August 20, 2024

> When I run ppi_eval.py strictly following the instructions, it outputs F1 scores far different from the ones shown in your paper.

I am also getting much higher F1 scores for each class, and I am unsure how to calculate the single F1 score for PPI as presented in Table 1 of the paper.
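In case it helps, a rough sketch (not from the repo; y_true and y_pred are assumed multi-label indicator matrices) of how per-class F1 scores relate to a single number: averaging the per-class values gives macro-F1, while pooling the TP/FP/FN counts across all classes before computing F1 gives micro-F1, the single figure mentioned above:

```python
import numpy as np

def per_class_f1(y_true, y_pred):
    """F1 computed independently for each class (column).

    The unweighted mean of these values is the macro-F1."""
    scores = []
    for c in range(y_true.shape[1]):
        tp = np.logical_and(y_pred[:, c] == 1, y_true[:, c] == 1).sum()
        fp = np.logical_and(y_pred[:, c] == 1, y_true[:, c] == 0).sum()
        fn = np.logical_and(y_pred[:, c] == 0, y_true[:, c] == 1).sum()
        denom = 2 * tp + fp + fn
        scores.append(2.0 * tp / denom if denom else 0.0)
    return np.array(scores)

def pooled_micro_f1(y_true, y_pred):
    """Single micro-F1: pool TP/FP/FN across every class, then compute one F1."""
    tp = np.logical_and(y_pred == 1, y_true == 1).sum()
    fp = np.logical_and(y_pred == 1, y_true == 0).sum()
    fn = np.logical_and(y_pred == 0, y_true == 1).sum()
    denom = 2 * tp + fp + fn
    return 2.0 * tp / denom if denom else 0.0
```

`per_class_f1(...).mean()` gives macro-F1, while `pooled_micro_f1(...)` should match sklearn's `f1_score(..., average="micro")` on the same arrays.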
