
text2event's People

Contributors

luyaojie


text2event's Issues

Missing event.schema

Hello, the README says that running "bash scripts/processing_data.bash" will automatically produce the train/dev/test JSON files and the event.schema file. After running it I got train_convert/dev_convert/test_convert.json, but no event.schema. Is this file supposed to be built manually?
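
Not an official answer, but if the preprocessing script really does not emit event.schema, a schema file can be rebuilt from the converted data. The sketch below is only an illustration and rests on two assumptions I cannot verify from this thread: that each converted JSON line carries an "event" list with "type" and "roles" entries, and that event.schema is three JSON lines (event types, role names, and a type-to-role map).

import json
from collections import defaultdict

def build_schema(paths, out_path="event.schema"):
    # Field names ("event", "type", "roles") are assumptions about the converted
    # files, not the repo's documented format.
    types, roles, type_to_roles = set(), set(), defaultdict(set)
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                instance = json.loads(line)
                for event in instance.get("event", []):
                    types.add(event["type"])
                    for role_name, _argument in event.get("roles", []):
                        roles.add(role_name)
                        type_to_roles[event["type"]].add(role_name)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(json.dumps(sorted(types)) + "\n")
        f.write(json.dumps(sorted(roles)) + "\n")
        f.write(json.dumps({t: sorted(r) for t, r in type_to_roles.items()}) + "\n")

# build_schema(["train_convert.json", "dev_convert.json", "test_convert.json"])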

About the ACE05 dataset

Hello, I noticed that more than half of the sentences in the ACE05 dataset contain no event information. Do these sentences need to be included in training? If they are left out, will the final performance drop?

About the detailed training arguments

Bravo! I've read the paper and the code in detail. I evaluated the downloaded dyiepp_ace2005_en_t5_large model and got test_trigger-F1 = 72.7273 and test_role-F1 = 55.0042. You've really done outstanding work!
BTW, could you please tell me how to get the detailed training arguments? I got a mess when I opened this file: dyiepp_ace2005_en_t5_large\training_args.bin

Get worse result on test set while training with the proposed curriculum learning algorithm

Hi, I evaluated t5-base, trained with the proposed curriculum learning algorithm, on the ACE05-EN+ dataset and got worse test results:
test_trigger-F1 = 65.9436
test_trigger-P = 61.0442
test_trigger-R = 71.6981
test_role-F1 = 49.6
test_role-P = 45.8693
test_role-R = 53.9913
While evaluating the model trained without the curriculum learning algorithm, I get:
test_trigger-F1 = 68.8863
test_trigger-P = 67.1141
test_trigger-R = 70.7547
test_role-F1 = 49.0647
test_role-P = 48.6448
test_role-R = 49.492

So performance drops when training with the curriculum learning algorithm; in particular, test_trigger-P drops sharply with +CL.
Is there anything wrong?
Here is my training args:
epoch: 5+30, batch_size=32, metric_for_best_model=eval_role_F1, label_smoothing=0.2, model: t5-base, dataset: ACE05-EN+
Looking forward to your reply, thank you!

Some questions about constrained decoding

Hello, Mr. Lu. In the constrained decoding algorithm there is a check that is not clear to me. Could you help explain it?

def check_state(self, tgt_generated):
    if tgt_generated[-1] == self.tokenizer.pad_token_id:
        return 'start', -1

Here, tgt_generated[-1] == self.tokenizer.pad_token_id means 'start'. Why? Can we substitute decoder_start_token_id for self.tokenizer.pad_token_id, or just use the value 0?

In my opinion, if tgt_generated[-1] == self.tokenizer.pad_token_id, the last token is a pad token, so generation should be entering the end phase rather than the start phase. So judging the start of generation by decoder_start_token_id seems preferable; is that right?
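
For context (my own reading, not an official answer): in the Hugging Face T5 configuration the decoder start token is the pad token, i.e. decoder_start_token_id == pad_token_id == 0, so at the first decoding step the only token generated so far is that pad/start token and the check above fires. A minimal sketch of the same check written against decoder_start_token_id, assuming the decoder keeps that id around (the self.decoder_start_token_id attribute is hypothetical):

def check_state(self, tgt_generated):
    # For T5 these two ids coincide (both 0), so the behaviour is unchanged;
    # naming it decoder_start_token_id just makes the intent explicit.
    if tgt_generated[-1] == self.decoder_start_token_id:  # hypothetical attribute
        return 'start', -1
    # ... remaining state checks unchanged ...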

question

Hi, while debugging I found that the functions in extraction.utils are never actually called. What do I need to change so that execution jumps from the library functions directly into the functions I wrote myself?

Dataset links for ACE05-EN+ and ERE-EN

Hi,

Thanks for open sourcing the tool!

I was wondering if you could direct me to the links for the ACE05-EN+ and ERE-EN datasets that you used?
I found the LDC page for the ACE2005 dataset but not for the other two.

Thanks!

Accuracy is 0

Hello, I built my own Chinese dataset following the data format used in the code, but when I ran the model both accuracy and recall were 0, and I cannot find the cause. I have already switched the pretrained model to mt5-base. I don't know where the problem lies and would appreciate your help.

Cannot find t5-base model

Hi,
I met a problem when executing the training script bash run_seq2seq_verbose.bash -d 0 -f tree -m t5-base --label_smoothing 0 -l 1e-4 --lr_scheduler linear --warmup_steps 2000 -b 16. How can I solve it?
Below is the error message:

404 Client Error: Not Found for url: https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/t5-base-pytorch_model.bin
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1030, in from_pretrained
resolved_archive_file = cached_path(
File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 1134, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 1300, in get_from_cache
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/t5-base-pytorch_model.bin

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "run_seq2seq.py", line 750, in
main()
File "run_seq2seq.py", line 399, in main
model = AutoModelForSeq2SeqLM.from_pretrained(
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 1301, in from_pretrained
return MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(
File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1046, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 't5-base'. Make sure that:

  • 't5-base' is a correct model identifier listed on 'https://huggingface.co/models'

  • or 't5-base' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
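
For reference, the 404 is raised against a Tsinghua mirror URL rather than the official Hugging Face hub, so the checkpoint file simply is not served there. One possible workaround (my suggestion, not from the repo) is to download t5-base once from the official hub on a machine that can reach it and then point the script at a local directory:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

local_dir = "./t5-base-local"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
tokenizer.save_pretrained(local_dir)
model.save_pretrained(local_dir)

# Then run the training script against the local copy, e.g.:
#   bash run_seq2seq_verbose.bash -d 0 -f tree -m ./t5-base-local ...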

Token not shown in original sentence

Hi,
I tried to use Text2Event on a Chinese corpus, and your model has great performance on my task.

However, I got a weird result when trying to predict the events in a sentence.
The weird result is shown below:

Original sentence:
上 回 我 們 說 到 曹 操 帶 了 百 萬 大 軍 , 乘 一 千 艘 戰 船 去 攻 打 東 吳 。
Result:
{'roles': [('Attack', 'Attacker', '曹 操'), ('Attack', 'Target', '西 吳')], 'type': 'Attack', 'trigger': '攻 打'}

Most of the result is correct, but the word "東 吳" was turned into "西 吳".

Does this also happen on English corpora? And how can I fix it?
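
One possible mitigation (my own suggestion, not something from the repo): since the generator can emit tokens that never appear in the input, a small post-processing filter can flag or drop predicted spans that are not substrings of the source sentence. A minimal sketch, assuming predictions follow the format shown above:

def filter_hallucinated_spans(record, source_text):
    # record is assumed to look like:
    # {'roles': [(type, role, argument), ...], 'type': ..., 'trigger': ...}
    # Spaces are stripped so the character-segmented Chinese text still matches.
    normalized_source = source_text.replace(" ", "")
    kept_roles = [
        (event_type, role, argument)
        for event_type, role, argument in record["roles"]
        if argument.replace(" ", "") in normalized_source
    ]
    return {**record, "roles": kept_roles}

# filter_hallucinated_spans(prediction, "上 回 我 們 說 到 曹 操 帶 了 百 萬 大 軍 , 乘 一 千 艘 戰 船 去 攻 打 東 吳 。")
# would drop ('Attack', 'Target', '西 吳') because "西吳" never occurs in the source.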

Extra tokens in input file

Hi, I'm confused about the <extra_id_0> and <extra_id_1> in json files under the data/text2tree folder.

In Huggingface's example, these extra_ids stand for the tokens they replace, such as
The <extra_id_0> walks in <extra_id_1> park and <extra_id_0> cute dog <extra_id_1> the <extra_id_2>.

But in the json files, most event labels contain multiple <extra_id_0> and <extra_id_1>, such as
"text": "He will blow a city off the earth in a minute if he can get the hold of the means to do it .", "event": "<extra_id_0> <extra_id_0> Attack blow <extra_id_0> Place earth <extra_id_1> <extra_id_1> <extra_id_1>"

<extra_id_0> and <extra_id_1> also appear when there is no event label:
"text": "I am shook over the aftermath .", "event": "<extra_id_0> <extra_id_1>"

So what do <extra_id_0> and <extra_id_1> mean here?
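
My reading (not an official answer): in the text2tree format the sentinel tokens are reused as structure markers rather than as mask placeholders, with <extra_id_0> acting as an opening bracket and <extra_id_1> as a closing bracket of the linearized event tree; "<extra_id_0> <extra_id_1>" is then just an empty tree for sentences with no events. A toy parser that illustrates this reading (it is not the repo's own parser):

def parse_tree(linearized):
    # <extra_id_0> opens a node, <extra_id_1> closes it; everything else is a leaf token.
    root, stack = [], []
    node = root
    for token in linearized.split():
        if token == "<extra_id_0>":
            child = []
            node.append(child)
            stack.append(node)
            node = child
        elif token == "<extra_id_1>":
            node = stack.pop()
        else:
            node.append(token)
    return root

# parse_tree("<extra_id_0> <extra_id_0> Attack blow <extra_id_0> Place earth <extra_id_1> <extra_id_1> <extra_id_1>")
# -> [[['Attack', 'blow', ['Place', 'earth']]]]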

A BUG of computing F1

Thanks for the code. But I think there is a bug when computing F1. In your code the predicted list, taking argument extraction as an example, is [(type1, role1, argument1), ...]. However, it does not take instance_id into account, and different instances may share the same (type1, role1, argument1), which produces extra true-positive matches. This would make the final evaluation metrics higher than they should be. Or maybe I misunderstood your code. Looking forward to your reply.
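
To make the point concrete, here is a minimal sketch (my own illustration, not the repo's scorer): with plain (type, role, argument) tuples, duplicates coming from different sentences can match each other, whereas prefixing each tuple with its sentence index keeps the matching per instance.

from collections import Counter

def micro_prf(gold_tuples, pred_tuples):
    # Multiset matching: a gold tuple can be matched at most as often as it occurs.
    gold, pred = Counter(gold_tuples), Counter(pred_tuples)
    tp = sum((gold & pred).values())
    precision = tp / max(sum(pred.values()), 1)
    recall = tp / max(sum(gold.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

# Gold: sentence 0 and sentence 1 each contain ("Attack", "Target", "city").
# Pred: the model predicts it twice, but both times for sentence 0.
print(micro_prf([("Attack", "Target", "city")] * 2,
                [("Attack", "Target", "city")] * 2))   # (1.0, 1.0, 1.0) without instance ids
print(micro_prf([(0, "Attack", "Target", "city"), (1, "Attack", "Target", "city")],
                [(0, "Attack", "Target", "city"), (0, "Attack", "Target", "city")]))  # (0.5, 0.5, 0.5)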

Problem when the Tokenizer and Model are set separately

Hello Dr. Lu, I replaced the model's tokenizer with XLMRobertaTokenizer and changed nothing else. Every time training reaches step 500 an error is thrown, and other tokenizers fail in the same way. Can the T5 model only be used together with T5Tokenizer? I have looked at your newly released UIE system, where the Chinese tokenizer is adapted from BertTokenizer, and I tried to follow that approach but also failed. After a week of debugging I still have no clue. Below are the changed code and the error message. Where might the error come from, and where should I start troubleshooting? What should I pay attention to when the tokenizer and the model do not come from the same model? Thank you, Dr. Lu!

Only the tokenizer call below was changed:

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base",bos_token=None,eos_token='</s>',unk_token='<unk>',pad_token='<pad>',cls_token=None,mask_token=None)

Error message:

ModelArguments(model_name_or_path='t5-small', config_name=None, tokenizer_name=None, cache_dir=None, use_fast_tokenizer=False, model_revision='main', use_auth_token=False)
DataTrainingArguments(task='event', dataset_name=None, dataset_config_name=None, text_column=None, summary_column=None, train_file='data/text2tree/one_ie_ace2005_subtype/train.json', validation_file='data/text2tree/one_ie_ace2005_subtype/val.json', test_file='data/text2tree/one_ie_ace2005_subtype/test.json', overwrite_cache=False, preprocessing_num_workers=None, max_source_length=256, max_target_length=128, val_max_target_length=128, pad_to_max_length=False, max_train_samples=None, max_val_samples=None, max_test_samples=None, source_lang=None, target_lang=None, num_beams=None, ignore_pad_token_for_loss=True, source_prefix='event: ', decoding_format='tree', event_schema='data/text2tree/one_ie_ace2005_subtype/event.schema')
ConstraintSeq2SeqTrainingArguments(output_dir='models/CF_2022-05-20-14-30-29880_t5-small_tree_one_ie_ace2005_subtype_linear_lr1e-4_ls0_16_wu2000', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=<IntervalStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=16, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.0001, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=30.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=2000, logging_dir='models/CF_2022-05-20-14-30-29880_t5-small_tree_one_ie_ace2005_subtype_linear_lr1e-4_ls0_16_wu2000_log', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=1, no_cuda=False, seed=421, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='models/CF_2022-05-20-14-30-29880_t5-small_tree_one_ie_ace2005_subtype_linear_lr1e-4_ls0_16_wu2000', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=True, metric_for_best_model='eval_role-F1', greater_is_better=True, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, sortish_sampler=False, predict_with_generate=True, constraint_decoding=True, label_smoothing_sum=False)
05/20/2022 14:30:10 - WARNING - __main__ -   Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
05/20/2022 14:30:10 - INFO - __main__ -   Training/evaluation parameters ConstraintSeq2SeqTrainingArguments(output_dir='models/CF_2022-05-20-14-30-29880_t5-small_tree_one_ie_ace2005_subtype_linear_lr1e-4_ls0_16_wu2000', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=<IntervalStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=16, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.0001, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=30.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=2000, logging_dir='models/CF_2022-05-20-14-30-29880_t5-small_tree_one_ie_ace2005_subtype_linear_lr1e-4_ls0_16_wu2000_log', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=1, no_cuda=False, seed=421, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='models/CF_2022-05-20-14-30-29880_t5-small_tree_one_ie_ace2005_subtype_linear_lr1e-4_ls0_16_wu2000', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=True, metric_for_best_model='eval_role-F1', greater_is_better=True, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, sortish_sampler=False, predict_with_generate=True, constraint_decoding=True, label_smoothing_sum=False)
05/20/2022 14:30:11 - WARNING - datasets.builder -   Using custom data configuration default-1e528a5b4868ef92
05/20/2022 14:30:11 - WARNING - datasets.builder -   Reusing dataset json (/home/xiaoli/.cache/huggingface/datasets/json/default-1e528a5b4868ef92/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 465.57it/s]
loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /home/xiaoli/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
Model config T5Config {
  "architectures": [
    "T5WithLMHeadModel"
  ],
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 512,
  "decoder_start_token_id": 0,
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "relu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "n_positions": 512,
  "num_decoder_layers": 6,
  "num_heads": 8,
  "num_layers": 6,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_num_buckets": 32,
  "task_specific_params": {
    "summarization": {
      "early_stopping": true,
      "length_penalty": 2.0,
      "max_length": 200,
      "min_length": 30,
      "no_repeat_ngram_size": 3,
      "num_beams": 4,
      "prefix": "summarize: "
    },
    "translation_en_to_de": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to German: "
    },
    "translation_en_to_fr": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to French: "
    },
    "translation_en_to_ro": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to Romanian: "
    }
  },
  "transformers_version": "4.4.2",
  "use_cache": true,
  "vocab_size": 32128
}

loading configuration file https://huggingface.co/xlm-roberta-base/resolve/main/config.json from cache at /home/xiaoli/.cache/huggingface/transformers/87683eb92ea383b0475fecf99970e950a03c9ff5e51648d6eee56fb754612465.dfaaaedc7c1c475302398f09706cbb21e23951b73c6e2b3162c1c8a99bb3b62a
Model config XLMRobertaConfig {
  "architectures": [
    "XLMRobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "xlm-roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.4.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 250002
}

loading file https://huggingface.co/xlm-roberta-base/resolve/main/sentencepiece.bpe.model from cache at /home/xiaoli/.cache/huggingface/transformers/9df9ae4442348b73950203b63d1b8ed2d18eba68921872aee0c3a9d05b9673c6.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8
loading file https://huggingface.co/xlm-roberta-base/resolve/main/tokenizer.json from cache at /home/xiaoli/.cache/huggingface/transformers/daeda8d936162ca65fe6dd158ecce1d8cb56c17d89b78ab86be1558eaef1d76a.a984cf52fc87644bd4a2165f1e07e0ac880272c1e82d648b4674907056912bd7
loading file https://huggingface.co/xlm-roberta-base/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/xlm-roberta-base/resolve/main/special_tokens_map.json from cache at None
loading file https://huggingface.co/xlm-roberta-base/resolve/main/tokenizer_config.json from cache at None
Using bos_token, but it is not set yet.
loading weights file https://huggingface.co/t5-small/resolve/main/pytorch_model.bin from cache at /home/xiaoli/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885
All model checkpoint weights were used when initializing T5ForConditionalGeneration.

All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
Assigning ['<extra_id_0>', '<extra_id_1>'] to the additional_special_tokens key of the tokenizer
Using bos_token, but it is not set yet.
Using cls_token, but it is not set yet.
Using mask_token, but it is not set yet.
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  6.06ba/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  5.26ba/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  5.54ba/s]
***** Running training *****
  Num examples = 19216
  Num Epochs = 30
  Instantaneous batch size per device = 16
  Total train batch size (w. parallel, distributed & accumulation) = 16
  Gradient Accumulation steps = 1
  Total optimization steps = 36030
{'loss': 13.1629, 'learning_rate': 2.5e-05, 'epoch': 0.42}                                                                                               
  1%|█▌                                                                                                            | 500/36030 [01:53<2:15:04,  4.38it/s]***** Running Evaluation *****
  Num examples = 901
  Batch size = 64
Traceback (most recent call last):
  File "run_seq2seq.py", line 762, in <module>
    main()
  File "run_seq2seq.py", line 662, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/trainer.py", line 1105, in train
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/trainer.py", line 1198, in _maybe_log_save_evaluate
    metrics = self.evaluate()
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 74, in evaluate
    return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/trainer.py", line 1667, in evaluate
    output = self.prediction_loop(
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/trainer.py", line 1805, in prediction_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/data/xiaoli/Text2Event_test/Text2Event-main/seq2seq/constrained_seq2seq.py", line 158, in prediction_step
    generated_tokens = self.model.generate(
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/generation_utils.py", line 982, in generate
    return self.greedy_search(
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/generation_utils.py", line 1288, in greedy_search
    next_tokens_scores = logits_processor(input_ids, next_token_logits)
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/generation_logits_process.py", line 89, in __call__
    scores = processor(input_ids, scores)
  File "/data/xiaoli/env/conda3/envs/text2event_test/lib/python3.8/site-packages/transformers/generation_logits_process.py", line 460, in __call__
    mask[batch_id * self._num_beams + beam_id, self._prefix_allowed_tokens_fn(batch_id, sent)] = 0
  File "/data/xiaoli/Text2Event_test/Text2Event-main/seq2seq/constrained_seq2seq.py", line 137, in prefix_allowed_tokens_fn
    return self.constraint_decoder.constraint_decoding(src_sentence=src_sentence,
  File "/data/xiaoli/Text2Event_test/Text2Event-main/extraction/extract_constraint.py", line 90, in constraint_decoding
    valid_token_ids = self.get_state_valid_tokens(
  File "/data/xiaoli/Text2Event_test/Text2Event-main/extraction/extract_constraint.py", line 198, in get_state_valid_tokens
    state, index = self.check_state(tgt_generated)
  File "/data/xiaoli/Text2Event_test/Text2Event-main/extraction/extract_constraint.py", line 125, in check_state
    last_special_index, last_special_token = special_index_token[-1]
IndexError: list index out of range
  1%|█▌                                                                                                            | 500/36030 [01:53<2:14:07,  4.41it/s]
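
For what it's worth, one plausible cause (an assumption on my part, not a confirmed diagnosis): the constrained decoder's state machine is keyed on the token ids of the pad token and of <extra_id_0>/<extra_id_1> as produced by the tokenizer, while generation starts from model.config.decoder_start_token_id, which for T5 is 0 (T5's pad id). With xlm-roberta-base the pad id is 1 and the sentinel ids are different, so check_state may find no special tokens in the generated prefix and special_index_token[-1] raises IndexError. A quick way to inspect the relevant ids:

from transformers import AutoConfig, AutoTokenizer

t5_config = AutoConfig.from_pretrained("t5-small")
xlmr_tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
xlmr_tokenizer.add_special_tokens({"additional_special_tokens": ["<extra_id_0>", "<extra_id_1>"]})

print("T5 decoder_start_token_id:", t5_config.decoder_start_token_id)   # 0
print("xlm-roberta pad_token_id:", xlmr_tokenizer.pad_token_id)          # 1
print("xlm-roberta sentinel ids:",
      xlmr_tokenizer.convert_tokens_to_ids(["<extra_id_0>", "<extra_id_1>"]))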

About Model Evaluation

Hi,
I have trained the model on ACE2005 dataset and I want to evaluate the model performance.
In the Model Evaluation block you give two scripts related to evaluation; which one should I choose?

If I have some sentences, how can I get an event table like the one shown in the paper?
[screenshot: example event table from the paper]

Thanks!
