
rng-kbqa's Issues

How is the entity linking F1 calculated?

Hi there,
I've noticed that in the paper appendix, there's an entity linking evaluation.
I would like to ask how the linking F1 is calculated. Is the evaluation script included in this repo?

Since some questions in GrailQA have no gold entity, and it is entirely possible for the linking result to be empty for a question, how does the F1 calculation handle these cases?
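
To make the question concrete, here is a minimal sketch of one convention I could imagine, assuming per-question averaging; the function and the both-empty rule are my own assumptions, not taken from the paper or repo:

    # hypothetical per-question linking F1; treating "no gold, no prediction"
    # as 1.0 is an assumption, not the paper's documented protocol
    def linking_f1(gold: set, pred: set) -> float:
        if not gold and not pred:
            return 1.0  # nothing to link and nothing predicted
        if not gold or not pred:
            return 0.0  # one side empty, so no overlap is possible
        tp = len(gold & pred)
        p, r = tp / len(pred), tp / len(gold)
        return 2 * p * r / (p + r) if p + r > 0 else 0.0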

Thanks for your reply.

About knowledge graph

Hi,
Can you suggest how I can implement this KBQA model on my own knowledge graph dataset?
I have tried loading the RDF file directly into the Virtuoso server, but the model does not seem to detect the graph.
Do I need to do any configuration on my RDF file?
Or is the problem related to entity linking? If so, what can I do to make the KBQA model work on my knowledge graph dataset?
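
For my own debugging, I check whether the triples are visible at all with a sketch like the one below, assuming a local Virtuoso SPARQL endpoint on port 3001 (adjust the URL to your setup):

    # sanity check: can SPARQLWrapper see any triples in the loaded graph?
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://localhost:3001/sparql")  # assumed endpoint
    sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["s"]["value"], row["p"]["value"], row["o"]["value"])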

Is it possible to provide results of entity linking?

RnG-KBQA's entity linking is a further enhancement of the GrailQA implementation.
Currently, entity linking is a relatively independent step, and significantly improving its effectiveness is usually difficult.
Would the authors be willing to publish the results of entity linking?

This may be of great help for subsequent studies.
Many thanks.

urllib.error.URLError

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX : <http://rdf.freebase.com/ns/>
    SELECT (?x0 AS ?label) WHERE {
        SELECT DISTINCT ?x0 WHERE {
            :m.03_r3 rdfs:label ?x0 .
            FILTER (langMatches( lang(?x0), "EN" ) )
        }
    }

Reading: 0%|

except urllib.error.URLError:
    print(query)  # dump the query that failed
    exit(0)       # then abort the entire run

After this exception is thrown, the program exits immediately. These three sites seem to be inaccessible. What could be going on, and is there an alternative if they cannot be reached?
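
A hedged workaround sketch, assuming the failure is transient and the query is issued through a function like the hypothetical execute_query below: retry a few times instead of exiting on the first URLError.

    import time
    import urllib.error

    def query_with_retry(execute_query, query, retries=3, delay=5.0):
        # execute_query is a placeholder for whatever function issues the
        # SPARQL request; retry transient connection failures before giving up
        for attempt in range(retries):
            try:
                return execute_query(query)
            except urllib.error.URLError as e:
                print(f"attempt {attempt + 1} failed: {e.reason}")
                time.sleep(delay)
        raise RuntimeError("SPARQL endpoint unreachable; is Virtuoso running?")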

Question about WebQSP evaluation

Hi!

The WebQSP eval.py file generates two F1 scores: "Average f1 over questions (accuracy)" and "F1 of average recall and average precision". May I know which one you are reporting?
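
My understanding of the difference, which may be wrong: the first averages per-question F1, while the second computes F1 from the averaged precision and recall. A toy illustration, not the eval.py code:

    # two questions: (precision, recall) = (1.0, 1.0) and (0.5, 1.0)
    precisions, recalls = [1.0, 0.5], [1.0, 1.0]
    f1s = [2 * p * r / (p + r) for p, r in zip(precisions, recalls)]
    avg_f1 = sum(f1s) / len(f1s)                    # "Average f1 over questions"
    avg_p = sum(precisions) / len(precisions)
    avg_r = sum(recalls) / len(recalls)
    macro_f1 = 2 * avg_p * avg_r / (avg_p + avg_r)  # "F1 of average recall and average precision"
    print(avg_f1, macro_f1)  # 0.833... vs. 0.857..., generally not equal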

Thanks.

How to process MIDs that can't be converted to string names?

Hi,
While following this excellent work, I encountered a problem: I can't convert some MIDs to string names. I used your get_name script and got some triples as output, but there is no helpful string name information in them. How should I handle this problem? Thanks!


Here are some examples of MIDs and their one-hop triples, obtained by running searches on Freebase:

m.0gxnnwp : [['m.0gxnnwp', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'people.sibling_relationship'], ['m.0gxnnwp', 'type.object.type', 'people.sibling_relationship'], ['m.0gxnnwp', 'people.sibling_relationship.sibling', 'm.06w2sn5'], ['m.0gxnnwp', 'people.sibling_relationship.sibling', 'm.0gxnnwq']]
m.0855mj_ : [['m.0855mj_', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'film.performance'], ['m.0855mj_', 'type.object.type', 'film.performance'], ['m.0855mj_', 'film.performance.actor', 'm.09l3p'], ['m.0855mj_', 'film.performance.film', 'm.062zjtt'], ['m.0855mj_', 'film.performance.character', 'm.0dttll']]
m.04g55p8: [['m.04g55p8', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'common.topic'], ['m.04g55p8', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'user.dfhuynh.default_domain.assassination'], ['m.04g55p8', 'type.object.type', 'common.topic'], ['m.04g55p8', 'type.object.type', 'user.dfhuynh.default_domain.assassination'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.assassinated_person', 'm.0d3k14'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.assassin', 'm.0bgl08'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.date', '1960-12-11-08:00'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.location', 'm.0rqf1'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.method', 'm.04g56gm'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.outcome', 'm.04g5679']]
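
For reference, this is roughly how I look up labels, a sketch assuming a local Virtuoso endpoint; it returns nothing for the MIDs above, which look like CVT/mediator nodes (e.g. film.performance) that carry no rdfs:label of their own:

    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "http://localhost:3001/sparql"  # assumed; adjust to your setup

    def get_label(mid):
        sparql = SPARQLWrapper(ENDPOINT)
        sparql.setReturnFormat(JSON)
        sparql.setQuery(f"""
            PREFIX : <http://rdf.freebase.com/ns/>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT ?l WHERE {{ :{mid} rdfs:label ?l .
                FILTER (langMatches(lang(?l), "EN")) }}
        """)
        rows = sparql.query().convert()["results"]["bindings"]
        return rows[0]["l"]["value"] if rows else None  # None for CVT nodes

    print(get_label("m.0gxnnwp"))  # prints None for a mediator node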

Training of Mention Detection Model (BERT-NER)

I would like to train the "mention detection model" on the GrailQA dataset, which can be done using /GrailQA/entity_linker/BERT_NER/run_ner.py. But it also expects the GrailQA dataset in CoNLL-2003 format (i.e., question tokens tagged with ["B", "I", "O", "[CLS]", "[SEP]"] tags), which is not present in the repository.
So would you please share the processed GrailQA dataset (train and valid splits) or the script for converting the dataset into the required format?
Can you please confirm that the BERT-NER model has been trained with the parameters mentioned in /GrailQA/entity_linker/BERT_NER/run_ner.py ?
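
For concreteness, here is a hypothetical sketch of the token-per-line B/I/O layout I have in mind (my own guess at the expected file, not something taken from the repo):

    who O
    directed O
    the B
    dark I
    knight I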

it takes too long when I run 'run_disamb.sh'

It looks like it will take more than ten days to run this program. I sincerely hope you can give me some help.

05/15/2023 15:48:51 - WARNING - __main__ -   Process rank: -1, device: cuda, n_gpu: 1, distributed training: False
05/15/2023 15:49:39 - INFO - __main__ -   Training/evaluation parameters Namespace(adam_epsilon=1e-08, bootstrapping_start=None, bootstrapping_ticks=None, cache_dir='./hfcache', config_name='', data_dir=None, dataset='grail', device=device(type='cuda'), disable_tqdm=False, do_eval=True, do_lower_case=True, do_predict=True, do_train=False, eval_all_checkpoints=False, eval_steps=500, evaluate_during_training=False, gradient_accumulation_steps=1, learning_rate=5e-05, linear_method='vanilla', local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_seq_length=96, max_steps=-1, model_name_or_path='checkpoints/grail_bert_entity_disamb', model_type='bert', n_gpu=1, no_cuda=False, num_contrast_sample=20, num_train_epochs=3.0, output_dir='results/disamb/grail_train', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=128, per_gpu_train_batch_size=8, predict_file='outputs/grail_train_entities.json', save_steps=500, seed=42, server_ip='', server_port='', threads=1, tokenizer_name='', train_file=None, training_curriculum='random', verbose_logging=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0)
05/15/2023 15:50:19 - INFO - __main__ -   Loading checkpoint checkpoints/grail_bert_entity_disamb for evaluation
05/15/2023 15:51:31 - INFO - __main__ -   Evaluate the following checkpoints: ['checkpoints/grail_bert_entity_disamb']
05/15/2023 15:51:52 - INFO - __main__ -   Creating features from dataset file at .
Read Exapmles:   1%|          | 367/44337 [2:24:05<389:09:16, 31.86s/it]
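
One detail I notice in the logged arguments is threads=1. If the feature-creation phase honors that flag (an assumption based only on the Namespace dump above, which I have not verified), editing run_disamb.sh so the underlying call passes a larger value might speed up this "Read Examples" phase, e.g.:

    --threads 8  # hypothetical tweak; flag name taken from the logged Namespace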

Format of the ranking candidates file for GrailQA

Hi. I am interested in the ranker part of this project. I am currently setting up the environment; however, it looks like the preceding steps could be time-consuming. Can I get some quick information on the format of the output files produced by:

python enumerate_candidates.py --split train # we use gt entity for training (so no need for prediction on training)
python enumerate_candidates.py --split dev --pred_file misc/grail_dev_entity_linking.json
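
To make the question concrete: would it be something like the purely hypothetical shape below (invented for illustration, not taken from the repo)?

    [{"qid": "...", "candidates": [{"logical_form": "(AND ...)", "label": 1}]}]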

Thanks!

Question about knowledge base

Hi!

Great work! May I know which knowledge base you are using for this paper? Freebase? But ELQ is based on Wikidata, right?

How long does enumerating candidates take?

Hello,

I was wondering how long it should take to get the candidates for the training data? I have been running it for 4 hours on a single CPU. If I understand correctly, the most time-consuming part is the edit distance computation. Would you be willing to provide the results for training? Thanks in advance!
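
For context on why this part is slow: edit distance is quadratic in string length per candidate pair. A generic sketch of the kind of computation involved, not the repo's actual code:

    # plain O(len(a) * len(b)) Levenshtein distance, illustrative only
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]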

Best,
Haishuo

Is there any release for query enumeration and ranking results?

Hi there,
I'm currently interested in reproducing the results of RnG-KBQA.
The dev set is relatively small, and enumeration worked fine for me.
However, the training set might be too big for enumerating candidates, perhaps even taking a few days for this single step?
I wonder whether there is a way to share the enumerated candidate queries for the training set, and maybe the ranking results as well?

Thanks a lot.

version of transformers

Hi Xi,

Thanks for open-sourcing this awesome work!
May I know which version of transformers you installed? I saw pytorch-transformers==1.1.0 in requirements.txt, but I hit a missing-package problem when importing transformers in run_ranker.py and run_generator.py. I tried transformers==3.4.0 and transformers==4.16.0, but both had problems.
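
Part of the confusion may be that pytorch-transformers and transformers are two different packages with different import paths; a quick environment check, nothing repo-specific:

    import importlib.util
    # pytorch-transformers installs as pytorch_transformers; transformers is separate
    for pkg in ("pytorch_transformers", "transformers"):
        print(pkg, "installed" if importlib.util.find_spec(pkg) else "missing")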

WebQSP reproduction help

Hi~!
I have encountered some problems trying to reproduce your great work on the WebQSP dataset. I meticulously followed the steps in the README.md, but the final results show significant discrepancies, as shown in the attached images. Could you please point out the most likely causes of the error? If you need more detailed information about my replication environment, please let me know. Thank you very much for your help.
[attached screenshots of the results]

GrailQA retrain ranker, use specific gpus

Hi! When I run python enumerate_candidates.py --split train, training seems to go well.
But when the process steps into evaluation, I found that all 8 GPUs are used. What should I do to use only 2 or 3 specific GPUs?
I set the device_ids in the code, but it didn't seem to work.
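
One generic workaround I am considering, not specific to this repo: restrict which GPUs the process can see at all by setting CUDA_VISIBLE_DEVICES before launch, e.g.:

    CUDA_VISIBLE_DEVICES=0,1 python enumerate_candidates.py --split train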
Thank you!

Something wrong with reproduction

When I use walk_through_demo.sh, it gets stuck at entity disambiguation. I do not know what is going wrong; it seems it cannot read the examples:
[attached screenshot]

Could you help me, @xiye17? Thanks!

About training the ranker

--per_gpu_train_batch_size 1 \

Hi,
I would like to ask about the batch size of the ranker (not the entity disambiguator).
In the paper, the batch size is 8; however, the script here uses 1.
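(One possible reconciliation, purely my assumption: with gradient accumulation, the effective batch size is per_gpu_train_batch_size * gradient_accumulation_steps * n_gpu, so 1 * 8 * 1 = 8 would match the paper. I have not confirmed the script's accumulation setting.)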

Besides, when evaluating (predicting) the LF ranking, should the batch size be 1, according to this comment in BERTCandidateRanker:

# for testing, batch size have to be 1

But the script sets the evaluation batch size to 128, as shown in:

--per_gpu_eval_batch_size 128 | tee "${exp_prefix}log.txt"

Do these two numbers have different meanings?
Could you please give me a clue? Thanks a lot.

other dataset

Hi,
Congratulations on such interesting work. Existing research usually considers both the CWQ and WebQSP datasets; however, you only test on WebQSP in your paper. I want to know: (1) whether you have conducted experiments on CWQ to verify your method's performance; and (2) if not, how one should conduct the experiment on that dataset. I think it should be similar to WebQSP. Looking forward to your reply.

the file in ontology

Thank you very much for sharing your code. I don't quite understand what the files in the ontology directory mean:
domain_dict, domain_info, fb_roles, fb_types, full_reverse_properties.json, reverse_properties. Do they have any connection to Freebase?

Why the space occupied by virtuoso.db gradually becomes larger?

Hi,
Excuse me. Following your guidelines for configuring the Freebase environment, I downloaded virtuoso.db and set up the environment. After successfully running virtuoso.py to start the Virtuoso service, I used SPARQLWrapper to query Freebase, as written in your code. I found that after some queries, the disk space occupied by the virtuoso.db file had grown from ~50 GB (the initially downloaded file) to ~140 GB. Using the command ll, I see that the file's last-modified time keeps updating, which means it is being modified while in use. Why does this happen? Is there some query cache, and how should I solve this problem? I hope you can help me.
I am looking forward to your reply.

Thanks!
jinhao.

Something wrong with reproduction

I am very interested in your project, and thank you for your code. But when I follow "Reproducing the Results on GrailQA / Step by Step Instructions / (ii) Disambiguating Entities (Entity Linking)", it gets stuck at entity disambiguation. I do not know what is going wrong; something seems to fail when running the line logits = model(**inputs)[1]. Can you help me?


Experiment Reproduction Help

I deployed the environment as instructed. The result obtained when testing the demo script is only 0.72, which is lower than the expected 0.86.
In addition, during follow-up tests, my computer's 32 GB of RAM did not seem to be enough: when I used the ranker on the dev data, the process was killed.
Could you please give me some advice on reproducing the results?
