Comments (10)
I split noun_closure.csv into training and test sets, ran embed.py on the training set, and ran my evaluation code on the test set to evaluate the model. Am I doing it right?
Assuming there aren't any root/leaf nodes in your validation/test sets, then yes.
When comparing two different models, sometimes one gets a higher mean rank and a higher MAP while the other gets a lower mean rank and a lower MAP. Intuitively, a model with a lower mean rank should also have a higher MAP. Does this relation always hold?
This relation does not necessarily hold, but it often does.
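A small toy example shows why the two metrics can disagree: average precision rewards a few near-perfect hits more than it rewards uniform mediocrity, while mean rank does the opposite. The helper functions below are illustrative only, not code from this repo:

```python
# Toy illustration: mean rank and average precision can disagree.
# Each list holds the ranks of a node's true neighbors among all candidates.

def mean_rank(ranks):
    return sum(ranks) / len(ranks)

def average_precision(ranks):
    # Precision at the rank of the k-th retrieved positive is k / rank_k.
    return sum((k + 1) / r for k, r in enumerate(sorted(ranks))) / len(ranks)

model_a = [1, 99]  # one perfect hit, one poor hit
model_b = [3, 4]   # consistently mediocre hits

print(mean_rank(model_a), average_precision(model_a))  # 50.0, ~0.510
print(mean_rank(model_b), average_precision(model_b))  # 3.5,  ~0.417
```

Here the second model has the better (lower) mean rank but the worse (lower) average precision, so the two metrics genuinely can pull in opposite directions.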
from poincare-embeddings.
Rank refers to ranking the neighbors of a given node against its non-neighbors in the embedding space. A lower mean rank is better, whereas a higher MAP is better.
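Concretely, the ranking step could be sketched like this (illustrative only; `emb`, `poincare_dist`, and `ranks_of_neighbors` are hypothetical names, not the repo's API):

```python
import numpy as np

def poincare_dist(u, v):
    # Distance in the Poincaré ball:
    # d(u, v) = arcosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2)))
    sq = np.sum((u - v) ** 2)
    return np.arccosh(1 + 2 * sq / ((1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))))

def ranks_of_neighbors(emb, node, neighbors):
    # For each true neighbor of `node`, count how many non-neighbors are
    # closer in the embedding space; rank 1 means no non-neighbor is closer.
    dists = {v: poincare_dist(emb[node], emb[v]) for v in emb if v != node}
    negative_dists = [d for v, d in dists.items() if v not in neighbors]
    return [1 + sum(d < dists[v] for d in negative_dists) for v in neighbors]
```

The mean of these ranks over all nodes gives the mean rank; turning each node's ranking into an average precision and averaging gives MAP.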
Or maybe there is something wrong with my evaluation code :(
This evaluation should take place after each training epoch. Why do you need to reimplement this?
Thanks for your reply.
I couldn't figure out how to train on the training set while testing on the test set with train-nouns.sh. I checked embed.py, and it seems the code trains and evaluates on the same dataset, so I reimplemented the evaluation code to run on test data. After splitting noun_closure.csv into training and test sets, I ran embed.py on the training set and my code on the test set to evaluate the model. Am I doing it wrong?
When I test different models, sometimes the mean rank is lower and the MAP is higher, which indicates better inference capacity. However, when comparing two different models, sometimes one gets a higher mean rank and a higher MAP while the other gets a lower mean rank and a lower MAP. Intuitively, a model with a lower mean rank should also have a higher MAP. Does this relation always hold?
Assuming you are trying to reproduce the results of Table 1 in this paper, it is correct that you should evaluate on your training set. The point of this evaluation is to show the differences in capacity of the models by seeing how well we are able to represent the original graph in the embedding space.
Thanks for the quick reply.
I am trying to reproduce the WordNet Link Prediction results of Table 1.
Should I use embed.py and evaluate on my training set?
Correct.
So both the Link Prediction and Reconstruction results in Table 1 come from evaluation on the training set?
Ahh sorry, I wasn't looking carefully enough and thought Table 1 was just reconstruction (and didn't include link prediction). The reconstruction portion of Table 1 uses the entire graph for both training and evaluation. The link prediction does split into train/validation/test sets. Note that as mentioned in the paper:
The validation and test set do not include links involving root or leaf nodes as these links would either be trivial or impossible to predict reliably.
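In code, a split respecting that constraint could look something like this (a hypothetical sketch, not the repo's actual splitting code; it assumes each row of the closure CSV is a (child, parent) pair):

```python
import csv
import random

def split_closure(path, test_frac=0.1, seed=0):
    """Hold out a fraction of edges for link prediction, skipping any edge
    that touches a root or leaf node (hypothetical helper, not from the repo).
    Assumes each CSV row is a (child, parent) pair of the transitive closure."""
    with open(path) as f:
        edges = [tuple(row[:2]) for row in csv.reader(f) if len(row) >= 2]
    children = {c for c, p in edges}
    parents = {p for c, p in edges}
    roots = parents - children    # nodes that never appear as a child
    leaves = children - parents   # nodes that never appear as a parent
    eligible = [e for e in edges if e[0] not in leaves and e[1] not in roots]
    random.Random(seed).shuffle(eligible)
    test = eligible[:int(test_frac * len(edges))]
    held_out = set(test)
    train = [e for e in edges if e not in held_out]
    return train, test
```

The train split keeps every root/leaf edge, so only edges that are neither trivial nor hopeless to predict end up in the test set.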
Sorry to bother you again, and thanks for your patience. I still have two questions:
- I split noun_closure.csv into training and test sets, ran embed.py on the training set, and ran my evaluation code on the test set to evaluate the model. Am I doing it right?
- When comparing two different models, sometimes one gets a higher mean rank and a higher MAP while the other gets a lower mean rank and a lower MAP. Intuitively, a model with a lower mean rank should also have a higher MAP. Does this relation always hold?
Got it. Thanks a lot!
Related Issues (20)
- The distance function of euclidean manifold is wrong.
- Error when running NIPS 2017 Release
- What is the output after learning?
- [Question] Running Inference on New Nodes
- euclidean embedding
- [Question] Predicting the parent of an unseen word in an existing hierarchy
- Restore from checkpoints and train?
- NIPS results not reproducible with this code.
- ValueError: Buffer dtype mismatch, expected 'long_t' but got 'long'
- Why the Euclidean gradient omit the first part ?
- stop when eval
- How to get embeddings?
- Hyperparameters for reproduction
- mammal_closure.csv not found
- Entailment cones compute the wrong angle?
- What should we do after training?
- Is there any filter file for plants and vehicles subtrees like 'mammals_filter.txt'? Could you please share these files?
- how was mammal_closure.csv created
- Large dataset that needs continuous training
- Problems related to constrain the embeddings to remain within the Poincaré ball via the projection