
DCA

DCA (Dynamic Context Augmentation) provides global entity linking models featuring:

  • Efficiency: Compared to existing global entity linking models, DCA requires only one pass through all mentions, yielding faster inference.

  • Portability: DCA can introduce topical coherence into local linking models without reshaping their original designs or structures.

Remarkably, our DCA models (trained with supervised learning or reinforcement learning) achieved:

  • 94.64% in-KB accuracy on the AIDA-CoNLL test set (AIDA-B).
  • 94.57% F1 score on the MSNBC dataset and 90.14% F1 score on the ACE2004 dataset.

Details about DCA are available in our paper: https://arxiv.org/abs/1909.02117.

This implementation follows the project structure of mulrel-nel.

Written and maintained by Sheng Lin ([email protected]) and Xiyuan Yang ([email protected]).

Overall Workflow


Data

Download the data from here and unzip it into the main folder (i.e. your-path/DCA).

The above data archive mainly contains the following resource files:

  • Datasets: one in-domain dataset (AIDA-CoNLL) and five cross-domain datasets (MSNBC / AQUAINT / ACE2004 / CWEB / WIKI), all sharing the same data format.

  • Mention Type: used to compute type similarity between mention-entity pairs. We predict types for each mention in the datasets using the NFETC typing model trained on the AIDA dataset.

  • Wikipedia inLinks: surface names of the inlinks of a Wikipedia page (entity), used to construct the dynamic context during model learning.

  • Entity Description: Wikipedia page contents (entity descriptions), used by one of our base models, Berkeley-CNN.

Installation

Requirements: Python 3.5 or 3.6, PyTorch 0.3, CUDA 7.5 or 8.
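For reference, a minimal environment sketch assuming conda is available (the exact PyTorch 0.3 build depends on your CUDA version, so adjust accordingly):

conda create -n dca python=3.6
conda activate dca
pip install torch==0.3.1  # pick the build matching your CUDA 7.5/8 setup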

Important Parameters

mode: train or eval mode.

method: training method, supervised learning (SL) or reinforcement learning (RL).

order: three decision orders -- offset / size / random. Please refer to our paper for their concrete definitions.

n_cands_before_rank: the number of candidate entities considered per mention (before ranking).

tok_top_n4inlink: the number of inlinks of a Wikipedia page (entity) that are considered as candidates for the dynamic context.

tok_top_n4ent: the number of inlinks of a Wikipedia page (entity) that are actually added into the dynamic context.

isDynamic: 2-hop DCA / 1-hop DCA / without DCA, corresponding to the experiments in Table 4 of our paper.

dca_method: soft+hard attention / soft attention / average sum, corresponding to the experiments in Table 5 of our paper.
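Putting these together, a hypothetical training invocation might look as follows; the values passed to --n_cands_before_rank, --tok_top_n4inlink, --tok_top_n4ent, --isDynamic, and --dca_method are illustrative only, so check main.py for the accepted values and defaults:

python main.py --mode train --method SL --order offset --model_path model --n_cands_before_rank 30 --tok_top_n4inlink 100 --tok_top_n4ent 20 --isDynamic 2 --dca_method 1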

Running

cd DCA/

export PYTHONPATH=$PYTHONPATH:../

Supervised Learning: python main.py --mode train --order offset --model_path model --method SL

Reinforcement Learning: python main.py --mode train --order offset --model_path model --method RL
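To evaluate a trained model, the same script can be run in eval mode with the --model_path used during training (this invocation also appears in the issues below):

Evaluation: python main.py --mode eval --order offset --model_path model --method SL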

Citation

If you find the implementation useful, please cite the following paper: Learning Dynamic Context Augmentation for Global Entity Linking.

@inproceedings{yang2019learning,
  title={Learning Dynamic Context Augmentation for Global Entity Linking},
  author={Yang, Xiyuan and Gu, Xiaotao and Lin, Sheng and Tang, Siliang and Zhuang, Yueting and Wu, Fei and Chen, Zhigang and Hu, Guoping and Ren, Xiang},
  booktitle={Proceedings of EMNLP-IJCNLP},
  year={2019}
}


Issues

Results in paper

Hi, how were the results in the paper obtained?
Are they the maximum value over the sum of each batch, or the maximum value for each dataset?

Why are existing candidates dropped in `find_coref`?

Hello,

I am trying to understand the with_coref and find_coref functions in the dataset loader. Roughly speaking, find_coref appears to do the following (expressed as Python):

def find_coref(cur_m, doc_mentions):
    result = []
    # for each mention m in the same document as cur_m
    for m in doc_mentions:
        # m's text starts or ends with cur_m's text, BUT is not equal to it
        if m.text != cur_m.text and (m.text.startswith(cur_m.text) or m.text.endswith(cur_m.text)):
            for cand in m.candidates:
                if cand not in result:  # removing duplicates
                    result.append(cand)
    return result

The results of find_coref are then used to overwrite cur_m's candidate list. This is confusing to me, though, since the BUT condition above means that the candidates previously in cur_m's own candidate list are lost (or at least potentially lost). Is this intentional? If so, can you explain what with_coref is intended to accomplish?

For example, in a local modification of this repository, I found that the gold entity (Teresa) is dropped from the candidate list (I've verified in the AIDA train CSV [line 2426] that this is indeed the correct gold entity for this mention):

RuntimeError: Failed to find gold_key 'Teresa' in list: [(0, ('Mother_Teresa', 1.0)), (1, ('Mother_Teresa_High_School', 0.001)), (2, ('The_Missionary_Position', 0.001)), (3, ('Blessed_Mother_Teresa_Catholic_Secondary_School', 0.0))]
orig list: [['Teresa', 0.364], ['Teresa_(Barbie)', 0.138], ['Teresa,_Rizal', 0.115], ['Teresa_Nielsen_Hayden', 0.103], ['Teresa_of_Ávila', 0.092], ['Teresa_Heinz', 0.038], ['Teresa,_Castellón', 0.031], ['Teresa,_Greater_Poland_Voivodeship', 0.029], ['Mother_Teresa', 0.026], ['Teresa_Scanlan', 0.021], ['Teresa_Teng', 0.018], ['Theresa,_Countess_of_Portugal', 0.018], ['George_McGovern', 0.015], ['Teresa_Crippen', 0.013], ['Teresa_Palmer', 0.012], ['Teresa_Cristina_of_the_Two_Sicilies', 0.01], ['Teresa_Earnhardt', 0.01], ['Teresa_Wynn_Roseborough', 0.009], ['Teresa_(2010_telenovela)', 0.009], ['The_Real_Housewives_of_New_Jersey', 0.008], ['Teresa_(film)', 0.008], ['Teresa_Jungman', 0.008], ['Teresa_Bagioli_Sickles', 0.007], ['Teresa_Fernández_de_Traba', 0.007], ['Teresa_Bryant', 0.007], ['Teresa,_Contessa_Guiccioli', 0.007], ['Teresa_Strasser', 0.006], ['Teresa_Vaill', 0.006], ['Teresa_Mak', 0.006], ['Teresa_Murphy', 0.006], ['Teresa_Cheung_(actress)', 0.006], ['Teresa_Rivera', 0.006], ['Teresa_Nzola_Meso_Ba', 0.006], ['Tracy_Bond', 0.006], ['Teresa_Medina', 0.006], ['Infanta_Maria_Teresa_of_Spain', 0.006], ['Teresa_Seiblitz', 0.006], ['Teresa_Forcier', 0.006], ['Teresa_Taylor', 0.006], ['Teresa_Motos', 0.006], ['Teresa_Piotrowska', 0.006], ['Teresa_Ferster_Glazier', 0.006], ['Teresa_Fedor', 0.006], ['Teresa_Ganzel', 0.006], ['Teresa_Portela_(Portuguese_canoeist)', 0.006], ['Teresa_de_la_Parra', 0.006], ['Teresa_Piccini', 0.006], ['Teresa_Borawska', 0.006], ['Princess_Maria_Teresa_of_Savoy', 0.006], ['Teresa_Roncon', 0.006], ['Teresa_Wentzler', 0.006], ['Teresa_Machado', 0.006], ['Teresa_Magbanua', 0.006], ['Teresa_del_Po', 0.006], ['Teresa_Sapieha', 0.006], ['Teresa_Edwards', 0.006], ['Teresa_A._Dolan', 0.006], ['Teresa_Hurtado_de_Ory', 0.006], ['Teresa_De_Sio', 0.006], ['Teresa_Hsu_Chih', 0.006], ['Lady_Teresa_Waugh', 0.006], ['Teresa_Lourenco', 0.006], ['Teresa_Lubomirska', 0.006], ['Teresio_Maria_Languasco', 0.006], ['Teresa_Woo-Paw', 0.006], ['Teresa_de_Cartagena', 0.006], ['Teresa_Bernabe', 0.006], ['Teresa_Amabile', 0.006], ['Maria_Teresa,_Princess_of_Beira', 0.006], ['Teresa_Korwin_Gosiewska', 0.006], ['Teresa_Bright', 0.006], ['Teresa_Daly', 0.006], ['Teresa_Villaverde', 0.006], ['Teresa_Stich-Randall', 0.006], ['Teresa_Polias', 0.006], ['Teresa_Wong', 0.006], ['Teresa_Pavlinek', 0.006], ['Teresa_Ruiz_(politician)', 0.006], ['Teresa_Cooper', 0.006], ['Teresa_Carr_Deni', 0.006], ['Teresa_P._Pica', 0.006], ['Teresa_S._Polley', 0.006], ['Teresa_Stratas', 0.006], ['Teresa_Lipowska', 0.006], ['Teresa_Carpio', 0.006], ['Teresa_Stolz', 0.006], ['Teresa_Wilson', 0.006], ['Teresa_Lalor', 0.006], ['Teresa_Hannigan', 0.006], ['Teresa_Chodkiewicz', 0.006], ['Teresa_Lisbon', 0.006], ['Teresa_Forn', 0.006], ['Teresa_Gutierrez', 0.006], ['Teresa_Maxwell-Conover', 0.006], ['Teresa_Ann_Savoy', 0.006], ['Teresa_Trull', 0.006], ['Teresa_Forcades', 0.006], ['Teresa_Lynch', 0.006], ['Teresa_Furtado', 0.006], ['Teresa_Southwick', 0.006]]

Any help on understanding this would be very useful. Thanks!

Maybe ent_inlink mistakes?

Hi! When I used your data, I found that some entities that should be related, such as 'Cambodia_national_football_team' and 'Football_Federation_of_Cambodia', have no link to each other, yet both link to 'Shrewsbury,_Pennsylvania'. That's strange, and I found more strange links and missing links when I tried to build a graph from them.
I used entityid_dictid_inlinks_uniq.pkl and assumed the dict maps an entity id (the key) to the ids of the entities that link to it (the values). Have I made a mistake, or is there a mistake in the data?
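For anyone who wants to reproduce this check, a minimal inspection sketch, assuming the pickle holds a dict mapping an entity id to a collection of inlinking entity ids as described above (the ids below are placeholders):

import pickle

with open('data/entityid_dictid_inlinks_uniq.pkl', 'rb') as f:
    inlinks = pickle.load(f)

# Peek at one entry to confirm the assumed id -> inlink-ids structure.
eid, links = next(iter(inlinks.items()))
print(eid, list(links)[:10])

# Check whether two entities link to each other (substitute real ids).
a, b = 123, 456
print(b in inlinks.get(a, ()), a in inlinks.get(b, ()))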

Why is the loss always negative?

Hi, thanks for your work.
I ran your code with the default arguments, following the command in the readme:
Reinforcement Learning: python main.py --mode train --order offset --model_path model --method RL
Why is the loss always negative?

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
epoch 0 total loss -5881.828728429389 -6.171908424375015
1906
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]
epoch 1 total loss -5041.10081607045 -5.289717540472664
2859
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]
epoch 2 total loss -4401.4143905708315 -4.6184830960869165
3812
[0, 1, 2, 3, 4, 5, 6, 7]
epoch 3 total loss -3761.1239844140346 -3.9466148839601622
4765
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]
epoch 4 total loss -3198.0963033663556 -3.3558198356415065
aida-A micro F1: 0.7510698256966912
aida-B micro F1: 0.7348940914158304
msnbc micro F1: 0.8967100229533282
aquaint micro F1: 0.8041958041958042
ace2004 micro F1: 0.841046277665996
clueweb micro F1: 0.6639417894358606
wikipedia micro F1: 0.6103098883218696
att_mat_diag tensor(17.4249, device='cuda:0')
tok_score_mat_diag tensor(17.3656, device='cuda:0')
ment_att_mat_diag tensor(17.3205, device='cuda:0')
ment_score_mat_diag tensor(17.3205, device='cuda:0')
entity2entity_mat_diag tensor(17.3725, device='cuda:0')
entity2entity_score_mat_diag tensor(17.4487, device='cuda:0')
knowledge2entity_mat_diag tensor(17.2840, device='cuda:0')
knowledge2entity_score_mat_diag tensor(17.3483, device='cuda:0')
ment2ment_mat_diag tensor(17.3205, device='cuda:0')
ment2ment_score_mat_diag tensor(17.3205, device='cuda:0')
f - l1.w, b tensor(5.8933, device='cuda:0') tensor(2.7462, device='cuda:0')
f - l2.w, b tensor(0.6788, device='cuda:0') tensor(0.0029, device='cuda:0')
5718
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
epoch 5 total loss -2894.0255648259754 -3.036752953647403
6671
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
epoch 6 total loss -2617.967075814433 -2.7470798277171387
7624
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
epoch 7 total loss -2439.785110020892 -2.560110293830946
8577
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
epoch 8 total loss -2333.8847663235483 -2.4489871629837863
9530
[0, 1, 2, 3, 4]
epoch 9 total loss -1700.5407024365093 -1.784407872441248
aida-A micro F1: 0.7527397975159169
aida-B micro F1: 0.7409141583054627
msnbc micro F1: 0.9074215761285387
aquaint micro F1: 0.8167832167832169
ace2004 micro F1: 0.8450704225352113
clueweb micro F1: 0.673913043478261
wikipedia micro F1: 0.6227350048073367
att_mat_diag tensor(17.6288, device='cuda:0')
tok_score_mat_diag tensor(17.5042, device='cuda:0')
ment_att_mat_diag tensor(17.3205, device='cuda:0')
ment_score_mat_diag tensor(17.3205, device='cuda:0')
entity2entity_mat_diag tensor(17.6331, device='cuda:0')
entity2entity_score_mat_diag tensor(17.6095, device='cuda:0')
knowledge2entity_mat_diag tensor(17.3107, device='cuda:0')
knowledge2entity_score_mat_diag tensor(17.3347, device='cuda:0')
ment2ment_mat_diag tensor(17.3205, device='cuda:0')
ment2ment_score_mat_diag tensor(17.3205, device='cuda:0')

self.ent_inlinks = config['entity_inlinks'] KeyError: 'entity_inlinks'

The command python main.py --mode train --order offset --model_path model --method RL runs successfully, but eval throws an error:

python main.py --mode eval --order offset --model_path model --method RL

root@:/data/DCA$ ~/.conda/envs/keras_bert/bin/python main.py --mode eval --order offset --model_path model --method RL
load conll at ../data/generated/test_train_data
load csv
370United News of India
process coref
load conll
reorder mentions within the dataset
create model
--- create EDRanker model ---
prerank model
--- create NTEE model ---
--- create AbstractWordEntity model ---
main model
try loading model from model
--- create MulRelRanker model ---
--- create LocalCtxAttRanker model ---
--- create AbstractWordEntity model ---
Traceback (most recent call last):
File "main.py", line 204, in
ranker = EDRanker(config=config)
File "../DCA/ed_ranker.py", line 52, in init
self.model = load_model(self.args.model_path, ModelClass)
File "../DCA/abstract_word_entity.py", line 28, in load
model = model_class(config)
File "../DCA/mulrel_ranker.py", line 32, in init
self.ent_inlinks = config['entity_inlinks']

The data directory looks like this:
ll data
total 135990
drwxrwxr-x 1 mqq mqq 108039782 Nov 6 2018 basic_data
drwxrwxr-x 1 mqq mqq 6472491 Nov 6 2018 data
-rw-rw-r-- 1 mqq mqq 1245609 Aug 20 2019 doc2type.pkl
-rw-rw-r-- 1 mqq mqq 120266616 May 18 2019 ent2desc.json
-rw-rw-r-- 1 mqq mqq 9864329 Aug 20 2019 entity2type.pkl
-rw-rw-r-- 1 mqq mqq 7869649 Aug 20 2019 entityid_dictid_inlinks_uniq.pkl
drwxrwxr-x 1 mqq mqq 2251141855 Jan 23 2019 generated
-rw-rw-r-- 1 mqq mqq 5862 Aug 20 2019 stopwords-multi.txt
-rw-rw-r-- 1 mqq mqq 82 Aug 20 2019 symbols.txt
drwxrwxr-x 1 mqq mqq 21420384 Nov 6 2018 test_data
drwxrwxr-x 1 mqq mqq 898623014 Nov 6 2018 word_ent_embs

Please take a look, thanks.

Please provide detailed parameters to reproduce the results in Tables 2 and 3 of the paper

Hi,

Thanks for your work.
I ran your code with the default arguments, following the commands in the readme.

Supervised Learning: python main.py --mode train --order offset --model_path model --method SL

Reinforcement Learning: python main.py --mode train --order offset --model_path model --method RL

After running for 500 epochs, I got results that are much lower than the results reported in the paper.
Specifically, in the SL setting:
best_aida_A_rlts [['aida-A', 0.9149358104581986], ['aida-B', 0.9279821627647714], ['msnbc', 0.9303749043611323], ['aquaint', 0.8433566433566434], ['ace2004', 0.8772635814889336], ['clueweb', 0.7220625224577794], ['wikipedia', 0.7024628355890836]]

In RL setting:
best_aida_A_rlts [['aida-A', 0.911178373864941], ['aida-B', 0.9208472686733556], ['msnbc', 0.9319051262433052], ['aquaint', 0.8657342657342657], ['ace2004', 0.8812877263581488], ['clueweb', 0.7107438016528925], ['wikipedia', 0.7360402337105243]]

The default order is offset, but in Fig. 3, DCA-SL with offset order should be 94.35 on AIDA-B, while DCA-RL should be 93.70.

So could you please provide the full commands, including the detailed argument settings, to reproduce the results?

Thanks a lot.
Looking forward to your reply.

Source of entity NER classes

Hi! First of all, thank you for sharing your very interesting work!

I have a simple question: how did you generate the file "./data/entity2type.pkl"?
This is essentially the same as asking: how did you determine the NER class (PER, ORG, LOC or UNK) best suited for each entity?

Thanks!

Error after 55th epoch while saving the model

The model runs fine for 55 epochs, but at the 55th epoch, when the model is to be saved, it throws the error below. Any idea what causes it?

epoch 54 total loss 0.2056220420028012 0.00021576289821909886
aida-A micro F1: 0.9303830497860348
aida-B micro F1: 0.9400222965440357
msnbc micro F1: 0.9426166794185157
aquaint micro F1: 0.8755244755244754
ace2004 micro F1: 0.8933601609657947
clueweb micro F1: 0.742094861660079
wikipedia micro F1: 0.7821906663708305
change learning rate to 0.0001
att_mat_diag
tok_score_mat_diag
entity2entity_mat_diag
entity2entity_score_mat_diag
knowledge2entity_mat_diag
knowledge2entity_score_mat_diag
type_emb
cnn.weight
cnn.bias
score_combine.0.weight
score_combine.0.bias
score_combine.3.weight
score_combine.3.bias
save model to model
Traceback (most recent call last):
File "main.py", line 226, in
ranker.train(conll.train, dev_datasets, config)
File "/content/drive/My Drive/data.tar.gz (Unzipped Files)/DCA/ed_ranker.py", line 1032, in train
self.model.save(self.args.model_path)
File "/content/drive/My Drive/data.tar.gz (Unzipped Files)/DCA/abstract_word_entity.py", line 78, in save
json.dump(config, f)
File "/usr/lib/python3.6/json/init.py", line 179, in dump
for chunk in iterable:
File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode
o = _default(o)
File "/usr/lib/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'set' is not JSON serializable
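A possible workaround (an untested sketch, not an official fix): make the values JSON-friendly before the json.dump call in abstract_word_entity.py, e.g. by converting any set values in the config dict to lists:

config = {k: (list(v) if isinstance(v, set) else v) for k, v in config.items()}
json.dump(config, f)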

TypeError: Object of type 'Tensor' is not JSON serializable

Hi,

We have successfully trained the model, but we ran into a problem while saving it to a file. The exception is below:

epoch 54 total loss 0.20950795884999707 0.00021984046049317634
aida-A micro F1: 0.9299655568312285
aida-B micro F1: 0.9415830546265329
msnbc micro F1: 0.9395562356541699
aquaint micro F1: 0.8797202797202797
ace2004 micro F1: 0.8853118712273641
clueweb micro F1: 0.7417355371900826
wikipedia micro F1: 0.7820427483174321
change learning rate to 0.0001
att_mat_diag
tok_score_mat_diag
entity2entity_mat_diag
entity2entity_score_mat_diag
knowledge2entity_mat_diag
knowledge2entity_score_mat_diag
type_emb
cnn.weight
cnn.bias
score_combine.0.weight
score_combine.0.bias
score_combine.3.weight
score_combine.3.bias
save model to model/
Traceback (most recent call last):
File "main.py", line 225, in
ranker.train(conll.train, dev_datasets, config)
File "../DCA/ed_ranker.py", line 1032, in train
self.model.save(self.args.model_path)
File "../DCA/abstract_word_entity.py", line 78, in save
json.dump(config, f)
File "/usr/lib64/python3.6/json/init.py", line 179, in dump
for chunk in iterable:
File "/usr/lib64/python3.6/json/encoder.py", line 430, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.6/json/encoder.py", line 404, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.6/json/encoder.py", line 325, in _iterencode_list
yield from chunks
File "/usr/lib64/python3.6/json/encoder.py", line 437, in _iterencode
o = _default(o)
File "/usr/lib64/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'Tensor' is not JSON serializable

Previously we got the same issue as #7 , but we followed the instruction and got that fixed.

We hope you can assist us with this issue :-)
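As with the set error in the previous issue, one possible workaround (an untested sketch, not the maintainers' fix) is to pass json.dump a default handler in abstract_word_entity.py that converts the offending values:

import json
import torch

def to_jsonable(o):
    # Fallback for values the json module cannot encode natively.
    if isinstance(o, torch.Tensor):
        return o.tolist()
    if isinstance(o, set):
        return list(o)
    raise TypeError('%s is not JSON serializable' % type(o).__name__)

json.dump(config, f, default=to_jsonable)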

Unable to run on google colab

I am trying to run the code on Google Colab. CUDA exits with the error: CUDA out of memory. Could you please tell me which parameters could be changed to avoid this error?

Result:

load conll at ../data/generated/test_train_data
load csv
370United News of India
process coref
load conll
reorder mentions within the dataset
create model
tcmalloc: large alloc 1181786112 bytes == 0xb04c000 @ 0x7efca71911e7 0x7efca15535e1 0x7efca15bc90d 0x7efca15bd522 0x7efca1654bce 0x50a7f5 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 0x507f24 0x50b053 0x634dd2 0x634e87 0x63863f 0x6391e1 0x4b0dc0 0x7efca6d8eb97 0x5b26fa
--- create EDRanker model ---
prerank model
--- create NTEE model ---
--- create AbstractWordEntity model ---
main model
create new model
--- create MulRelRanker model ---
--- create LocalCtxAttRanker model ---
--- create AbstractWordEntity model ---
^C

TypeError: predict() missing 2 required positional arguments: 'dynamic_option' and 'order_learning'

Issue:

Running the following command:

python main.py --mode eval --order offset --model_path 'Model/model' --method SL

Raises the following error:

TypeError: predict() missing 2 required positional arguments: 'dynamic_option' and 'order_learning'

Is this the correct fix?

As far as I can tell, the main.py script should contain the following line:

predictions = ranker.predict(data, args.isDynamic, ranker.model.order_learning)
  1. Would this be the correct way to run the main script for eval?
  2. What is order_learning?

AttributeError: 'EDRanker' object has no attribute 'rt_flag'

Hi, I'm trying to run the main script:
python main.py --mode eval --order offset --model_path 'Model/model' --method SL
Running eval raises the following error:
AttributeError: 'EDRanker' object has no attribute 'rt_flag'

I notice that rt_flag is somehow associated with 'aida-B' in the dev_datasets.

  1. What exactly is rt_flag supposed to indicate?
  2. What is the downstream effect of rt_flag, and would it allow us to apply DCA to a custom dataset?

Thank you
