
query2box's Issues

The difference between the MRR of one-hop path queries and link prediction

First of all, thanks for open-sourcing the code for such a good piece of work!

I notice that in the paper the model's MRR on FB15k-237 is 0.295 (Table 4), while its MRR on FB15k-237 for 1p queries is about 0.4 (Table 9). What is the difference between these two results? From my understanding they evaluate on the same structure of data, so the model should reach identical performance.

Thanks a lot again.
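
For reference, MRR and Hits@K are simple functions of the per-query rank of the correct answer, so which queries go into the average changes the number; one unconfirmed guess is that the two tables average over different evaluation sets (e.g., both prediction directions versus tail-only 1p queries, or different answer filtering). A minimal sketch of the metrics with made-up ranks (any link to Tables 4/9 is an assumption):

def mrr(ranks):
    # Mean reciprocal rank over a list of 1-based ranks.
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    # Fraction of queries whose correct answer ranks within the top k.
    return sum(1 for r in ranks if r <= k) / len(ranks)

tail_ranks = [1, 3, 10]               # toy ranks for (h, r, ?) queries
head_ranks = [2, 50, 100]             # toy ranks for (?, r, t) queries

print(mrr(tail_ranks))                # 0.478: tail-only averaging
print(mrr(tail_ranks + head_ranks))   # 0.327: averaging both directions lowers MRR here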

data

Excuse me, how is the query data built? For example, the file train-queries.pkl.
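
Not an official answer, but as a rough illustration of what such a file could contain (the exact schema of train-queries.pkl is an assumption, not taken from the repo), 1-hop queries can be derived directly from the training triples:

import pickle
from collections import defaultdict

# Hypothetical sketch: build 1-hop queries and their answer sets from triples.
# The tuple layout (anchor, (relation,)) is an assumption, not the repo's schema.
triples = [(0, 5, 7), (0, 5, 9), (2, 3, 7)]   # toy (head, relation, tail) triples

answers = defaultdict(set)
for h, r, t in triples:
    answers[(h, (r,))].add(t)                 # query -> set of answer entities

with open("train-queries-sketch.pkl", "wb") as f:
    pickle.dump(list(answers.keys()), f)
with open("train-answers-sketch.pkl", "wb") as f:
    pickle.dump(dict(answers), f)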

train_triples_1c.pkl, train_triples_2c.pkl

Hi
Thanks for sharing the code. How can I generate files like train_triples_1c.pkl and train_triples_2c.pkl?
Is it possible for you to release the data processing code?
Thanks!
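
The official preprocessing script is not in the repo, but conceptually a 2-chain ("2c") query composes two consecutive edges. A hedged sketch (the tuple layout is my own guess, not the released format):

from collections import defaultdict

# Hypothetical sketch: generate 2-chain queries by composing consecutive edges.
triples = [(0, 5, 7), (7, 3, 9), (0, 5, 8), (8, 3, 9)]
out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

two_chain = set()
for h, r1, t1 in triples:
    for r2, t2 in out_edges[t1]:
        two_chain.add(((h, (r1, r2)), t2))    # ((anchor, (rel1, rel2)), answer)

print(sorted(two_chain))                      # [((0, (5, 3)), 9)]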

Evaluation examples using "_hard"?

Hi Ren:

I was wondering about the evaluation/answer sets used to compute the reported metrics, i.e., MRR and Hits@K.

There are two pickled answer dictionaries for each type of chain, e.g., test_ans_1c.pkl and test_ans_1c_hard.pkl.

According to query2box/codes/model.py, lines 1017 to 1021 at commit 99dc9f5:

ans_tensor = torch.LongTensor(hard_ans_list) if not args.cuda else torch.LongTensor(hard_ans_list).cuda()
argsort = torch.transpose(torch.transpose(argsort, 0, 1) - ans_tensor, 0, 1)
ranking = (argsort == 0).nonzero()
ranking = ranking[:, 1]
ranking = ranking + 1

Only the "_hard" version of answers are used for evaluation. I wanted to clarify the meaning and origin of the "_hard" answers as I couldn't find it in the paper.

It seems that the normal answers are only used to find the "false answers", which in turn are used to filter the scores in

query2box/codes/model.py, lines 1005 to 1015 at commit 99dc9f5:

ans_list = list(ans)
hard_ans_list = list(hard_ans)
false_ans_list = list(false_ans)
ans_idxs = np.array(hard_ans_list)
vals = np.zeros((len(ans_idxs), args.nentity))
vals[np.arange(len(ans_idxs)), ans_idxs] = 1
axis2 = np.tile(false_ans_list, len(ans_idxs))
axis1 = np.repeat(range(len(ans_idxs)), len(false_ans))
vals[axis1, axis2] = 1
b = torch.Tensor(vals) if not args.cuda else torch.Tensor(vals).cuda()
filter_score = b*score

If possible, could you elaborate a bit on this chunk?
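
My reading of this chunk (an interpretation, not an official explanation): the 0/1 mask b keeps the score of the answer currently being ranked plus everything outside the full answer set, and zeroes the scores of the other known true answers, i.e., the standard filtered ranking. A toy version with invented ids:

import numpy as np

# Toy version of the masking above. Entity universe {0..4}; the query's full
# answer set is {1, 3}; we rank the hard answer 3 in the filtered setting.
nentity = 5
ans = {1, 3}
hard_ans_list = [3]
false_ans = np.array([e for e in range(nentity) if e not in ans])

vals = np.zeros((len(hard_ans_list), nentity))
vals[np.arange(len(hard_ans_list)), hard_ans_list] = 1      # keep the ranked answer
vals[np.repeat(range(len(hard_ans_list)), len(false_ans)),
     np.tile(false_ans, len(hard_ans_list))] = 1            # keep all non-answers

score = np.array([[0.2, 0.8, 0.1, 0.6, 0.3]])
print(vals * score)   # [[0.2 0.  0.1 0.6 0.3]] -- the other true answer is zeroed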

Make code pip installable

It would be really useful if this code were pip-installable to make its installation and usage more reproducible. Would you be willing to accept a PR for this?

Some questions about the implementation of "ip", "pi", and "ui"

Thank you for providing the implementation for query2box.

I have discovered some discrepancies between the implementation of the "ip", "pi", and "ui" queries and what is described in the paper, mainly a mismatch in the number of projections in these three query types.

For "pi", the code is implemented like:
image
there are two projections from set 1 and one projection from set 2.
However, according to the paper, the "pi" should be one projection from set 1 and no projection conducted for set 2.

A similar situation holds for "ip" and "ui", where the code contains extra projections compared to what is described in the paper.

I was wondering whether I have misunderstood the code or misinterpreted the results shown in the paper.
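
For concreteness, here is a schematic sketch of the composition the issue describes, with toy stand-ins for the paper's operators (translation for projection; mean-center/min-offset for intersection, instead of the attention/DeepSets operators the paper actually uses):

import torch

# Schematic sketch of a "pi" query under a simplified box model.
def project(center, offset, r_center, r_offset):
    # Toy projection: translate the center, widen the offset.
    return center + r_center, offset + r_offset

def intersect(boxes):
    # Toy intersection: mean of centers, elementwise min of offsets.
    centers = torch.stack([c for c, _ in boxes])
    offsets = torch.stack([o for _, o in boxes])
    return centers.mean(dim=0), offsets.min(dim=0).values

d = 4
e1, e2 = torch.randn(d), torch.randn(d)                    # anchor entities
r1, r2, r3 = [(torch.randn(d), torch.rand(d)) for _ in range(3)]

branch1 = project(*project(e1, torch.zeros(d), *r1), *r2)  # two projections
branch2 = project(e2, torch.zeros(d), *r3)                 # one projection
center, offset = intersect([branch1, branch2])             # final query box
print(center.shape, offset.shape)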
