Hi there, I'm Jinheon Baek.
I'm a Ph.D. student in the Graduate School of AI at KAIST (MLAI Lab), advised by Prof. Sung Ju Hwang.
Official Code Repository for the paper "Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction" (NeurIPS 2020)
Paper: https://arxiv.org/abs/2006.06648
Thanks for your excellent work on OOG link prediction. Could you please tell me how to get pre-trained embeddings for my own dataset with entities and relations?
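In case it helps clarify what I mean, here is the kind of toy, pure-Python DistMult pretraining I have in mind for my own (head, relation, tail) triples; the logistic loss, the single random negative tail, and all hyperparameters are my own assumptions, not necessarily your setup:

```python
import math
import random

def distmult_score(E, R, h, r, t):
    """DistMult triple score: sum_k E[h][k] * R[r][k] * E[t][k]."""
    return sum(eh * wr * et for eh, wr, et in zip(E[h], R[r], E[t]))

def pretrain_distmult(triplets, n_ent, n_rel, dim=8, epochs=100, lr=0.05, seed=0):
    """Toy sketch: SGD on a logistic loss with one random negative tail per
    positive triple. Hyperparameters are illustrative assumptions."""
    rng = random.Random(seed)
    E = [[rng.gauss(0.0, 0.1) for _ in range(dim)] for _ in range(n_ent)]
    R = [[rng.gauss(0.0, 0.1) for _ in range(dim)] for _ in range(n_rel)]
    for _ in range(epochs):
        for h, r, t in triplets:
            t_neg = rng.randrange(n_ent)  # uniform negative tail (assumption)
            for tail, y in ((t, 1.0), (t_neg, 0.0)):
                s = distmult_score(E, R, h, r, tail)
                g = 1.0 / (1.0 + math.exp(-s)) - y  # dLoss/dScore of logistic loss
                for k in range(dim):
                    gh = g * R[r][k] * E[tail][k]
                    gr = g * E[h][k] * E[tail][k]
                    gt = g * E[h][k] * R[r][k]
                    E[h][k] -= lr * gh
                    R[r][k] -= lr * gr
                    E[tail][k] -= lr * gt
    return E, R
```

Is this roughly the right idea, or do you pre-train the embeddings differently before the meta-learning stage?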
Hi there,
Firstly, thank you for your work.
I have read your paper and I have a few questions to clarify.
Mainly, how do you "... formulate a set of tasks such that the model learns to generalize over unseen entities, which are simulated using seen entities"? Your paper also mentions sampling a task from the distribution p(T), but how is p(T) obtained? Is it predefined?
In other words, in terms of the code, how did you pre-process your data so that it is split into meta-train/meta-valid/meta-test triplets?
self.filtered_triplets, self.meta_train_task_triplets, self.meta_valid_task_triplets, self.meta_test_task_triplets, \
self.meta_train_task_entity_to_triplets, self.meta_valid_task_entity_to_triplets, self.meta_test_task_entity_to_triplets \
= utils.load_processed_data('./Dataset/processed_data/{}'.format(args.data))
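For concreteness, this is how I currently imagine the entity-level split could be produced — the function name, the split ratios, and the policy of dropping triplets that link two unseen entities are all my guesses, not your actual pre-processing:

```python
import random
from collections import defaultdict

def split_by_unseen_entities(triplets, valid_ratio=0.1, test_ratio=0.1, seed=0):
    """Sketch: sample disjoint 'unseen' entity sets for meta-valid/meta-test,
    keep triplets whose entities are all seen as the background graph, and
    group each unseen entity's triplets into its few-shot task.
    Ratios and the unseen-unseen drop policy are assumptions."""
    entities = sorted({e for h, _, t in triplets for e in (h, t)})
    random.Random(seed).shuffle(entities)
    n_valid = int(len(entities) * valid_ratio)
    n_test = int(len(entities) * test_ratio)
    valid_unseen = set(entities[:n_valid])
    test_unseen = set(entities[n_valid:n_valid + n_test])
    unseen = valid_unseen | test_unseen

    background = []                  # triplets over seen entities only
    valid_tasks = defaultdict(list)  # unseen entity -> its triplets
    test_tasks = defaultdict(list)
    for h, r, t in triplets:
        touched = {h, t} & unseen
        if not touched:
            background.append((h, r, t))
        elif len(touched) == 1:
            e = touched.pop()
            (valid_tasks if e in valid_unseen else test_tasks)[e].append((h, r, t))
        # triplets linking two unseen entities are dropped in this sketch
    return background, dict(valid_tasks), dict(test_tasks)
```

Is this close to what load_processed_data expects, or is the task distribution p(T) constructed some other way?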
Other than that, do you mind elaborating on "our meta-learning framework can simulate the unseen entities during meta-training"? I am still confused about how your model works.
Thanks!!!
Hi,
first of all, thanks for your great work!
I have a question about the pre-trained embeddings generated with DistMult. Do we need to ignore "unseen entities" when we pre-train, i.e., remove all the triples that contain unseen entities? Or do we just put the whole KG into the pre-training process and then mask the unseen entities during inductive/transductive training?
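To make the first option concrete, this is the kind of filtering I have in mind before pre-training (a sketch under my own assumptions, not necessarily what your code does):

```python
def filter_for_pretraining(triplets, unseen_entities):
    """Sketch: keep only triples whose head AND tail are both seen, so the
    KG-embedding pre-training (e.g. DistMult) never observes unseen entities.
    Whether the repo filters exactly like this is an assumption."""
    unseen = set(unseen_entities)
    return [(h, r, t) for h, r, t in triplets if h not in unseen and t not in unseen]
```

If the second option is the intended one, I would instead pre-train on the whole KG and mask the unseen entities' embeddings afterwards — which of the two did you use?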
Thanks!
Hi,
I am wondering how you train these baseline methods in your task. As I understand it, these three methods use entity-pair matching, and they do not use embeddings of sparse relations during training and evaluation. Do you also ignore the relations in the triples containing unseen entities while training them?
Also, I think GMatching, MetaR, and FSRL originally use a meta-learning framework. What is the difference between, e.g., GMatching and GMatching*?
Hi,
Your work is very interesting. Can you explain the inductive and transductive data setups used during training and testing? How do they differ in your setup in terms of dataset preparation?
Thank you for your excellent work!
Unseen entities are split into (meta-)train/valid/test sets. In your code, I found that the triplets in self.meta_train_task_entity_to_triplets, self.meta_valid_task_entity_to_triplets, and self.meta_test_task_entity_to_triplets are pairwise disjoint. Is this a coincidence or a requirement? I ask because, if entity s is assigned to train and entity o to test, which split does the triplet (s, r, o) belong to? In that case, the triplets in train and test would not be disjoint.
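For reference, this is the small check I used to confirm the pairwise disjointness (a sketch; the entity-to-triplets dict layout mirrors the loaded processed data):

```python
def check_disjoint_task_triplets(*entity_to_triplets):
    """Sketch: return True iff the triplet sets grouped per split
    (each split is a dict mapping an entity to its triplets) are
    pairwise disjoint across all given splits."""
    seen = set()
    for split in entity_to_triplets:
        split_triplets = {trip for trips in split.values() for trip in trips}
        if split_triplets & seen:
            return False
        seen |= split_triplets
    return True
```

On the datasets I loaded this returns True for the three splits, which is why I suspect cross-split triplets like (s, r, o) are handled specially during pre-processing.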