
rng-kbqa's Introduction

RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering

Authors: Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou and Caiming Xiong

Abstract

main figure

Existing KBQA approaches, despite achieving strong performance on i.i.d. test data, often struggle in generalizing to questions involving unseen KB schema items. Prior ranking-based approaches have shown some success in generalization, but suffer from the coverage issue. We present RnG-KBQA, a Rank-and-Generate approach for KBQA, which remedies the coverage issue with a generation model while preserving a strong generalization capability. Our approach first uses a contrastive ranker to rank a set of candidate logical forms obtained by searching over the knowledge graph. It then introduces a tailored generation model conditioned on the question and the top-ranked candidates to compose the final logical form. We achieve new state-of-the-art results on the GrailQA and WebQSP datasets. In particular, our method surpasses the prior state-of-the-art by a large margin on the GrailQA leaderboard. In addition, RnG-KBQA outperforms all prior approaches on the popular WebQSP benchmark, even including the ones that use oracle entity linking. The experimental results demonstrate the effectiveness of the interplay between ranking and generation, which leads to the superior performance of our proposed approach across all settings, with especially strong improvements in zero-shot generalization.

Paper link: https://arxiv.org/pdf/2109.08678.pdf

Requirements

The code is tested under the following environment setup:

  • python==3.8.10
  • pytorch==1.7.0
  • transformers==3.3.1
  • spacy==3.1.1
  • for other requirements, please see requirements.txt

System requirements:

It's recommended to use a machine with over 300G of memory to train the models and a machine with 128G of memory for inference. That said, 256G of memory is sufficient for running inference and training all of the models (some memory-saving tricks are needed when training the ranker model for GrailQA).

General Setup

Setup Experiment Directory

Before running the scripts, please use setup.sh to set up the experiment folder. It mainly creates symbolic links in each experiment directory.
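For example, from the repository root (assuming setup.sh sits at the top level, which is how this page refers to it):

sh setup.sh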

Setup Freebase

All of the datasets use Freebase as the knowledge source. Please follow Freebase Setup to set up a Virtuoso triplestore service. If you modify the default URL, you may need to change the URL in /framework/executor/sparql_executor.py accordingly after starting your Virtuoso service.
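Once Virtuoso is up, a quick connectivity check along the following lines can save debugging later. This is a minimal sketch, not code from the repo: the endpoint URL is the common Virtuoso default, and m.0f8l9c is just an arbitrary Freebase mid used for illustration.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Assumed default Virtuoso endpoint; keep it in sync with sparql_executor.py
    sparql = SPARQLWrapper("http://localhost:8890/sparql")
    sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX : <http://rdf.freebase.com/ns/>
    SELECT ?label WHERE { :m.0f8l9c rdfs:label ?label } LIMIT 1
    """)
    sparql.setReturnFormat(JSON)
    # A well-configured service returns a JSON result instead of raising an error
    print(sparql.query().convert())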

Reproducing the Results on GrailQA

Please use /GrailQA as the working directory when running experiments on GrailQA.


Prepare dataset and pretrained checkpoints

Dataset

Please download the dataset and put the files under outputs/ so that they are organized as outputs/grailqa_v1.0_train.json, outputs/grailqa_v1.0_dev.json, and outputs/grailqa_v1.0_test.json. (Please rename the test-public split to test.)

NER Checkpoints

We use the NER system (under the directories entity_linking and entity_linker) from the original GrailQA code repo. Please use the following instructions (copied from the original repo) to pull the related data.

Other Checkpoints

Please download the following checkpoints for entity disambiguation, candidate ranking, and augmented generation, then unzip them and put them under the checkpoints/ directory.

KB Cache

We attach a cache of KB query results, which can save some time. Please download the cache file for GrailQA, unzip it, and put the contents under cache/, so that cache/grail-LinkedRelation.bin and cache/grail-TwoHopPath.bin are in place.
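If you want to sanity-check the cache after unpacking, the following sketch assumes the .bin files are standard Python pickles (an assumption based on the extension; the loading code in the framework is authoritative):

    import pickle

    # Load the (assumed) pickled cache of KB query results and peek at it
    with open("cache/grail-LinkedRelation.bin", "rb") as f:
        linked_relation_cache = pickle.load(f)
    print(type(linked_relation_cache))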


Running inference

Demo for Checking the Pipeline

It's recommended to use the one-click demo script first to test whether everything mentioned above is set up correctly. If it successfully runs through, you'll get a final F1 of around 0.86. Please make sure you can reproduce the results on this small demo set first, as inference on dev and test can take a long time.

sh scripts/walk_through_demo.sh

Step by Step Instructions

We also provide step-by-step inference instructions as below:

(i) Detecting Entities

Once the entity linker is ready, run

python detect_entity_mention.py --split <split> # eg. --split test

This will write entity mentions to outputs/grail_<split>_entities.json. We extract up to 10 candidate entities for each mention, which will be further disambiguated in the next step.

!! Running entity detection for the first time requires building a surface form index, which can take a long time (but this is only needed the first time).

(ii) Disambiguating Entities (Entity Linking)

We have provided a pretrained entity disambiguation model:

sh scripts/run_disamb.sh predict <model_path> <split>

E.g., sh scripts/run_disamb.sh predict checkpoints/grail_bert_entity_disamb test

This will write the prediction results (in the form of selected entity index for each mention) to misc/grail_<split>_entity_linking.json.

(iii) Enumerating Logical Form Candidates

python enumerate_candidates.py --split <split> --pred_file <pred_file>

E.g., python enumerate_candidates.py --split test --pred_file misc/grail_test_entity_linking.json.

This will write enumerated candidates to outputs/grail_<split>_candidates-ranking.jsonline.

(iv) Running Ranker

sh scripts/run_ranker.sh predict <model_path> <split>

E.g., sh scripts/run_ranker.sh predict checkpoints/grail_bert_ranking test

This will write the candidate logits (the logit of each candidate for each example) to misc/grail_<split>_candidates_logits.bin, and the prediction results (in the original GrailQA prediction format) to misc/grail_<split>_ranker_results.txt

You may evaluate the ranker results by running python grail_evaluate.py <path_to_data_split> <path_to_predictions>

E.g., python grail_evaluate.py outputs/grailqa_v1.0_dev.json misc/grail_dev_ranker_results.txt

(v) Running Generator

First, prepare the generation model inputs:

python make_generation_dataset.py --split <split> --logit_file <pred_file>

E.g., python make_generation_dataset.py --split test --logit_file misc/grail_test_candidate_logits.bin.

This will read the candidates, use the logits to select the top-k candidates, and write the generation model inputs to outputs/grail_<split>_gen.json.
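Conceptually, this selection just ranks each example's candidates by their logits and keeps the best k. A minimal sketch of the idea, using hypothetical inputs (candidates and logits for a single example; the repo's actual I/O differs):

    # Keep the k candidates with the highest ranker logits for one example
    def select_topk(candidates, logits, k=10):
        ranked = sorted(zip(candidates, logits), key=lambda p: p[1], reverse=True)
        return [cand for cand, _ in ranked[:k]]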

Second, run the generation model to get the top-k predictions:

sh scripts/run_gen.sh predict <model_path> <split>

E.g., sh scripts/run_gen.sh predict checkpoints/grail_t5_generation test.

This will generate top-k decoded logical forms stored at misc/grail_<split>_topk_generations.json.

(vi) Final Inference Steps

Having the decoded top-k predictions, we go down the top-k list and execute the logical forms one by one until we find one that returns valid answers.

python eval_topk_prediction.py --split <split> --pred_file <pred_file>

E.g., python eval_topk_prediction.py --split test --pred_file misc/grail_test_topk_generations.json

This will write the prediction results (in the original GrailQA prediction format) to misc/grail_<split>_final_results.txt.
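In pseudocode terms, the go-down-the-list procedure looks roughly like the following sketch; execute_logical_form is a hypothetical stand-in for the repo's executor, not its actual API:

    # Return the first top-k logical form that executes to a non-empty answer set
    def pick_first_executable(topk_logical_forms, execute_logical_form):
        for lf in topk_logical_forms:
            try:
                answers = execute_logical_form(lf)
            except Exception:
                continue  # skip logical forms that fail to execute
            if answers:  # a non-empty result counts as a valid answer
                return lf, answers
        return None, []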

You can then use the official GrailQA evaluation script to run evaluation:

python grail_evaluate.py <path_to_data_split> <path_to_predictions>

E.g., python grail_evaluate.py outputs/grailqa_v1.0_dev.json misc/grail_dev_final_results.txt


Training Models

We have already attached pretrained models ready for running inference. If you'd like to train your own models, please check out the README in the /GrailQA folder.

Reproducing the Results on WebQSP

Please use /WebQSP as the working directory when running experiments on WebQSP.


Prepare dataset and pretrained checkpoints

Dataset

Please download the WebQSP dataset and put the files under outputs/ so that they are organized as outputs/WebQSP.train.json and outputs/WebQSP.test.json.

Evaluation Script

Please make a copy of the official evaluation script (eval/eval.py in the WebQSP zip file) and put the script under this directory (WebQSP) with the name legacy_eval.py.

Model Checkpoints

Please download the following checkpoints for candidate ranking and augmented generation, then unzip them and put them under the checkpoints/ directory.

KB Cache

Please download the cache file for WebQSP, unzip it, and put the contents under cache/ so that cache/webqsp-LinkedRelation.bin and cache/webqsp-TwoHopPath.bin are in place.


Running inference

(i) Parsing SPARQL Queries to S-Expressions

As stated in the paper, we generate s-expressions, which are not provided by the original dataset, so we provide a script to parse SPARQL queries into s-expressions.
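For a rough sense of the target format, an s-expression composes set operations over KB relations. A hypothetical example in this general style (made up for illustration, not taken from the dataset):

    (AND people.person (JOIN people.person.nationality m.0f8l9c))

This denotes the set of people.person entities whose nationality relation points to the entity m.0f8l9c.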

Run python parse_sparql.py, which will augment the original dataset files with s-expressions and save them in outputs as outputs/WebQSP.train.expr.json and outputs/WebQSP.test.expr.json. Since there is no validation set, we further randomly select 200 examples from the training set for validation, yielding the ptrain and pdev splits.
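A minimal sketch of carving out such a pseudo-dev split, assuming the .expr.json file holds a JSON list of examples and using an arbitrary seed (both are assumptions; the repo's own splitting logic is authoritative):

    import json
    import random

    with open("outputs/WebQSP.train.expr.json") as f:
        examples = json.load(f)  # assumed to be a list of example dicts

    random.seed(0)  # arbitrary seed for illustration; the official split may differ
    random.shuffle(examples)
    pdev, ptrain = examples[:200], examples[200:]
    print(len(ptrain), len(pdev))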

(ii) Entity Detection and Linking using ELQ

This step can be skipped, as we've already included the outputs of this step (misc/webqsp_train_elq-5_mid.json, outputs/webqsp_test_elq-5_mid.json).

The script and config for the ELQ model can be found in elq_linking/run_elq_linker.py. If you'd like to use the script to run entity linking, please copy run_elq_linker.py to the ELQ model directory and run the script there.

(iii) Enumerating Logical Form Candidates

python enumerate_candidates.py --split test

This will write enumerated candidates to outputs/webqsp_test_candidates-ranking.jsonline.

(iv) Running Ranker

sh scripts/run_ranker.sh predict checkpoints/webqsp_bert_ranking test

This will write the candidate logits (the logit of each candidate for each example) to misc/webqsp_test_candidates_logits.bin, and the prediction results (in the original GrailQA prediction format) to misc/webqsp_test_ranker_results.txt

(v) Running Generator

First, prepare the generation model inputs:

python make_generation_dataset.py --split test --logit_file misc/webqsp_test_candidate_logits.bin.

This will read the candidates, use the logits to select the top-k candidates, and write the generation model inputs to outputs/webqsp_test_gen.json.

Second, run the generation model to get the top-k predictions:

sh scripts/run_gen.sh predict checkpoints/webqsp_t5_generation test

This will generate top-k decoded logical forms stored at misc/webqsp_test_topk_generations.json.

(vi) Final Inference Steps

Having the decoded top-k predictions, we go down the top-k list and execute the logical forms one by one until we find one that returns valid answers (the same procedure sketched in the GrailQA section above).

python eval_topk_prediction.py --split test --pred_file misc/webqsp_test_topk_generations.json

The prediction results will be stored (in GrailQA prediction format) in misc/webqsp_test_final_results.txt.

You can then use the official WebQSP evaluation script (modified only in its I/O) to run evaluation:

python webqsp_evaluate.py outputs/WebQSP.test.json misc/webqsp_test_final_results.txt.


Training Models

We have already attached pretrained models ready for running inference. If you'd like to train your own models, please check out the README in the /WebQSP folder.

Citation

@inproceedings{ye2021rngkbqa,
    title = {RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering},
    author = {Xi Ye and Semih Yavuz and Kazuma Hashimoto and Yingbo Zhou and Caiming Xiong},
    booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)},
    year = {2022},
}

Questions?

For any questions, feel free to open issues or reach out to the authors by email.

License

The code is released under BSD 3-Clause - see LICENSE for details.


rng-kbqa's Issues

About training the ranker

--per_gpu_train_batch_size 1 \

Hi,
I would like to ask about the batch size of the ranker (not the entity disambiguator).
In the paper, the batch size is 8. However, the script here uses 1.

Besides, when evaluating (predicting) the LF ranking, should the batch size be 1, according to this comment in BERTCandidateRanker:

# for testing, batch size have to be 1

But the script sets the evaluation batch size to 128, as shown in:

--per_gpu_eval_batch_size 128 | tee "${exp_prefix}log.txt"

Do these two numbers have different meanings?
Would you please provide me with a clue? Thanks a lot.

it takes too long when I run 'run_disamb.sh'

It seems it will take me more than ten days to run this program. I sincerely hope you can give me some help.

05/15/2023 15:48:51 - WARNING - __main__ -   Process rank: -1, device: cuda, n_gpu: 1, distributed training: False
05/15/2023 15:49:39 - INFO - __main__ -   Training/evaluation parameters Namespace(adam_epsilon=1e-08, bootstrapping_start=None, bootstrapping_ticks=None, cache_dir='./hfcache', config_name='', data_dir=None, dataset='grail', device=device(type='cuda'), disable_tqdm=False, do_eval=True, do_lower_case=True, do_predict=True, do_train=False, eval_all_checkpoints=False, eval_steps=500, evaluate_during_training=False, gradient_accumulation_steps=1, learning_rate=5e-05, linear_method='vanilla', local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_seq_length=96, max_steps=-1, model_name_or_path='checkpoints/grail_bert_entity_disamb', model_type='bert', n_gpu=1, no_cuda=False, num_contrast_sample=20, num_train_epochs=3.0, output_dir='results/disamb/grail_train', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=128, per_gpu_train_batch_size=8, predict_file='outputs/grail_train_entities.json', save_steps=500, seed=42, server_ip='', server_port='', threads=1, tokenizer_name='', train_file=None, training_curriculum='random', verbose_logging=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0)
05/15/2023 15:50:19 - INFO - __main__ -   Loading checkpoint checkpoints/grail_bert_entity_disamb for evaluation
05/15/2023 15:51:31 - INFO - __main__ -   Evaluate the following checkpoints: ['checkpoints/grail_bert_entity_disamb']
05/15/2023 15:51:52 - INFO - __main__ -   Creating features from dataset file at .
Read Exapmles:   1%|          | 367/44337 [2:24:05<389:09:16, 31.86s/it]

urllib.error.URLError

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX : <http://rdf.freebase.com/ns/> 
    SELECT (?x0 AS ?label) WHERE {
    SELECT DISTINCT ?x0  WHERE {
    :m.03_r3 rdfs:label ?x0 . 
                        FILTER (langMatches( lang(?x0), "EN" ) )
                         }
                         }

Reading: 0%|

except urllib.error.URLError:
    print(query)
    exit(0)

After this exception is thrown, the program exits directly. These three prefix URLs seem to be inaccessible. What is going on? Is there an alternative if they are not accessible?

Question about knowledge base

Hi!

Great work! May I know what knowledge base you are using for this paper? Freebase? But ELQ is based on Wikidata, right?

How is the entity linking F1 calculated?

Hi there,
I've noticed that in the paper appendix, there's an entity linking evaluation.
I would like to ask how the linking F1 is calculated. Is the evaluation code in this repo?

Since there are some questions in GrailQA that have no gold entity, and it is entirely possible for the linking result to be empty for a question, how does the F1 calculation handle these cases?

Thanks for your reply.

Training of Mention Detection Model (BERT-NER)

I would like to train the "mention detection model" on the GrailQA dataset, which can be done using /GrailQA/entity_linker/BERT_NER/run_ner.py. However, it expects the GrailQA dataset in CoNLL-2003 format (i.e., question tokens tagged with ["B", "I", "O", "[CLS]", "[SEP]"] tags), which is not present in the repository.
Would you please share the processed GrailQA dataset (train and valid splits) or a script for converting the dataset into the required format?
Also, can you confirm that the BERT-NER model was trained with the parameters mentioned in /GrailQA/entity_linker/BERT_NER/run_ner.py?

Question about WebQSP evaluation

Hi!

The WebQSP eval.py file generates two F1 scores: "Average f1 over questions (accuracy)" and "F1 of average recall and average precision". May I know which one you are reporting?

Thanks.

other dataset

Hi,
Congratulations on such interesting work. Existing research usually considers both the CWQ and WebQSP datasets. However, you only test on WebQSP in your paper. I want to know: (1) whether you have conducted experiments on CWQ to verify your method's performance; (2) if not, how to conduct experiments on this new dataset? I think it should be similar to WebQSP. Looking forward to your reply.

Something wrong with reproduction

I am very interested in your project, and thank you for your code. But when I follow Reproducing the Results on GrailQA / Step by Step Instructions / (ii) Disambiguating Entities (Entity Linking), it gets stuck at entity disambiguation. I do not know what is going wrong; something seems to fail when running the line "logits = model(**inputs)[1]". Can you help me?


Format of the ranking candidates file for GrailQA

Hi. I am interested in the ranker part of this project. I am currently setting up the environment. However, it looks like the previous steps could be time-consuming. Can I get some quick information on the format of the output files for:

python enumerate_candidates.py --split train # we use gt entities for training (so no need for prediction on the training split)
python enumerate_candidates.py --split dev --pred_file misc/grail_dev_entity_linking.json

Thanks!

About knowledge graph

Hi,
Can you suggest how I can implement this KBQA model on my own knowledge graph dataset?
I have tried to load the RDF file into the virtuoso server directly, but the model seems not to detect the graph.
Do I need to do any configuration on my RDF file?
Or is it the problem related to entity linking? If so, what can I do to make the KBQA model work on my knowledge graph dataset?

Is there any release for query enumeration and ranking results?

Hi there,
I'm currently interested in reproducing the results of RnG-KBQA.
The dev set is relatively small, and it worked fine for me.
However, the training set might be too big for enumerating candidates; this single step alone could take a few days.
I wonder if there is a way to share the files for the enumerated candidate queries on the training set, and maybe the ranking results?

Thanks a lot.

How to process mids that can't be converted to a string name?

Hi,
When following this excellent work, I encountered a problem: I can't convert some mids to string names. I used your get_name script and got some triples as output, but there is no helpful string name information. How should I handle this? Thanks!


Here are some examples of mids and their one-hop triples, obtained by searching Freebase:

m.0gxnnwp : [['m.0gxnnwp', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'people.sibling_relationship'], ['m.0gxnnwp', 'type.object.type', 'people.sibling_relationship'], ['m.0gxnnwp', 'people.sibling_relationship.sibling', 'm.06w2sn5'], ['m.0gxnnwp', 'people.sibling_relationship.sibling', 'm.0gxnnwq']]
m.0855mj_ : [['m.0855mj_', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'film.performance'], ['m.0855mj_', 'type.object.type', 'film.performance'], ['m.0855mj_', 'film.performance.actor', 'm.09l3p'], ['m.0855mj_', 'film.performance.film', 'm.062zjtt'], ['m.0855mj_', 'film.performance.character', 'm.0dttll']]
m.04g55p8: [['m.04g55p8', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'common.topic'], ['m.04g55p8', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'user.dfhuynh.default_domain.assassination'], ['m.04g55p8', 'type.object.type', 'common.topic'], ['m.04g55p8', 'type.object.type', 'user.dfhuynh.default_domain.assassination'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.assassinated_person', 'm.0d3k14'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.assassin', 'm.0bgl08'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.date', '1960-12-11-08:00'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.location', 'm.0rqf1'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.method', 'm.04g56gm'], ['m.04g55p8', 'user.dfhuynh.default_domain.assassination.outcome', 'm.04g5679']]

Experiment Reproduction Help

I deployed the environment as instructed. The result obtained when testing the demo script is only 0.72, which is lower than the expected 0.86.
In addition, when I ran the follow-up tests, my computer's 32G of RAM did not seem to be enough: when I used the ranker on the dev data, the process was killed.
Could you please give me some advice on reproducing the results?

Why the space occupied by virtuoso.db gradually becomes larger?

Hi,
Excuse me. Following your guidelines for configuring the Freebase environment, I downloaded virtuoso.db and configured the environment. After successfully running virtuoso.py to start the Virtuoso service, I used SPARQLWrapper to query Freebase, as written in your code. I found that, after some query operations, the disk space occupied by the virtuoso.db file had increased from ~50G (the initially downloaded file) to ~140G. Using the command ll, I see that the file's last modification time keeps being updated, which means it is modified during use. I don't know why this happens. Is there some query cache? How should I solve this problem? I hope you can help me.
I am looking forward to your reply.

Thanks!
jinhao.

Is it possible to provide results of entity linking?

RnG-KBQA's entity linking is a further enhancement of the GrailQA implementation.
Currently, entity linking is a relatively independent step, and significantly improving its effectiveness is usually difficult.
Would the authors be willing to publish the results of entity linking?

This may be of great help for subsequent studies.
Many thanks.

version of transformers

Hi Xi,

Thanks for open-sourcing this awesome work!
May I know which version of transformers you installed? I saw pytorch-transformers==1.1.0 in requirements.txt, but I have a missing-package problem when importing transformers in run_ranker.py and run_generator.py. I tried transformers==3.4.0 and transformers==4.16.0, but they both have some problems.

WebQSP reproduction help

Hi~!
I have encountered some problems trying to reproduce your great work on the WebQSP dataset. I have meticulously followed the steps in the README.md, but the final results show significant discrepancies, as shown in the attached images. Could you please point out the most likely causes of the error? If you need more detailed information about my replication environment, please let me know. Thank you very much for your help.

GrailQA retrain ranker, use specific gpus

Hi! When I run python enumerate_candidates.py --split train, training seems to run well.
But when the process steps into evaluation, I find that all 8 GPUs are used. What should I do to use only 2 or 3 specific GPUs?
I set device_ids in the code, but it didn't seem to work.
Thank you!

How long does enumerating candidates take?

Hello,

I was wondering how long it takes to get the candidates for the training data? I have been running it for 4 hours on a single CPU. If I understand correctly, the most time-consuming part is the edit-distance computation. Would you be willing to provide the results for training? Thanks in advance!

Best,
Haishuo

Something wrong with reproduction

When I use walk_through_demo.sh, it gets stuck at entity disambiguation. I do not know what is going wrong; it seems it cannot read the examples.

Could you help me, @xiye17? Thanks!

the file in ontology

Thank you very much for sharing your code. I don't quite understand what these files in ontology mean:
domain_dict, domain_info, fb_roles, fb_types, full_reverse_properties.json, and reverse_properties. Do they have any connection with Freebase?
