
gate's Introduction

gate's People

Contributors

wasiahmad


gate's Issues

Share the ACE preprocessing code

As far as I know, model performance on the ACE dataset is quite sensitive to how the data is preprocessed. I notice that you refer to the paper Cross-lingual Structure Transfer for Relation and Event Extraction for preprocessing details, but its authors did not share their code. Would you be willing to share your preprocessing code?

Excuse me

How are negative training examples generated during the training phase?

What's the value for "subj_type" when performing EARL?

In a previous issue, you said that "subject and object refer to an event trigger and one of its arguments", but a trigger usually does not refer to any entity, so how should I set the value of "subj_type"? Could you please give me some advice?

aligned.embed.300.vec?

I prepared the data, but when I run "setup.sh" there is an error: the file aligned.embed.300.vec is not found. How can I get it, and where can I download it?
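(A minimal sketch of one way such a file could be produced, assuming aligned.embed.300.vec is simply a concatenation of cross-lingually aligned 300-dimensional word vectors, e.g. fastText "align" vectors, for the ACE languages. This is an assumption, not the repo's documented setup, and the input file names below are hypothetical.)

```python
# Hypothetical sketch: merge per-language aligned 300-d word-vector files
# into a single aligned.embed.300.vec. Not taken from the repo's setup.sh.
def merge_aligned_vectors(input_paths, output_path, dim=300):
    seen = set()
    rows = []
    for path in input_paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            _header = f.readline()        # fastText .vec files start with "<count> <dim>"
            for line in f:
                word = line.split(" ", 1)[0]
                if word in seen:
                    continue              # keep the first occurrence of each word
                seen.add(word)
                rows.append(line.rstrip("\n"))
    with open(output_path, "w", encoding="utf-8") as out:
        out.write(f"{len(rows)} {dim}\n")
        out.write("\n".join(rows) + "\n")

# Hypothetical input file names for the three ACE languages.
merge_aligned_vectors(
    ["wiki.en.align.vec", "wiki.zh.align.vec", "wiki.ar.align.vec"],
    "aligned.embed.300.vec",
)
```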

Is the POS feature useful?

Hello, may I ask whether you are using the POS feature? I find that the overall performance declines after adding the POS feature.

With POS:    59.03 | 62.35 | 59.70 | 70.09 | 54.3  | 57.4
Without POS: 63.90 | 68.8  | 59.4  | 68.8  | 55.14 | 59.00

What's the data in vocab.txt and how can I get it?

The ACE2005 dataset is prepared and I preprocessed it just as you describe. But when I run the script /data/setup.sh, there is an error because I don't have the file 'vocab.txt'. Can you tell me what data it contains and how I can get it?
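(For illustration, a hedged sketch of how such a vocabulary file might be built from preprocessed examples. The JSON field name and the one-token-per-line, frequency-sorted output format are assumptions, not the repo's documented schema.)

```python
# Hypothetical sketch of building a vocab.txt from preprocessed ACE examples.
import json
from collections import Counter

def build_vocab(json_files, out_path, min_count=1):
    counter = Counter()
    for path in json_files:
        with open(path, encoding="utf-8") as f:
            for ex in json.load(f):
                counter.update(ex["token"])        # assumed field name for the word list
    with open(out_path, "w", encoding="utf-8") as out:
        for word, count in counter.most_common():  # most frequent words first
            if count >= min_count:
                out.write(word + "\n")

build_vocab(["train.json", "dev.json"], "vocab.txt")  # hypothetical file names
```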

A query about your paper

I have two questions that have puzzled me for a long time; could you give me an answer? Thank you a lot.
Q1: You say "a k-layer GCN aggregates information of words that are k hops away. Such a way of embedding structure may hinder cross-lingual transfer when the source and target languages have different path length distributions among words". Does the path distance refer to the distance in the syntactic structure? Since both GATE and the GCN use the syntactic structure, what is the difference?
Q2: You say "GCNs struggle to model words with long-range dependencies". Is that because a GCN only aggregates information from words that are k hops away? But GATE also only attends to words within distance delta, so what is the difference?
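(A toy sketch of the contrast the question is about, not the authors' implementation: one GCN layer only mixes 1-hop syntactic neighbours, so k layers are needed to reach words k hops away, whereas a distance-thresholded attention head can connect any pair of words within syntactic distance delta in a single layer. Variable names and shapes are illustrative assumptions.)

```python
import torch
import torch.nn.functional as F

def gcn_layer(h, adj, weight):
    # One GCN layer: each word aggregates only its 1-hop syntactic neighbours
    # (adj is the self-loop-augmented adjacency matrix of the dependency tree),
    # so reaching k-hop words requires stacking k layers.
    return torch.relu(adj @ h @ weight)

def distance_masked_attention(h, dist, delta, wq, wk, wv):
    # Single self-attention head where a word may attend, in one layer, to any
    # other word whose syntactic distance is at most delta; pairs farther than
    # delta are masked out. dist is an [n, n] matrix of tree distances.
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = q @ k.T / (k.size(-1) ** 0.5)
    scores = scores.masked_fill(dist > delta, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```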

Parameters

Hello, I saw a lot of experimental data in the appendix of the GATE paper. I would like to know whether these experiments were all carried out with uniform parameter settings, for example:
--struct_position True
--position_dim 30
--max_relative_pos 64
Do you remember these settings? I would like to reproduce your results with your parameters. Could you please tell me the parameter values you used?

Something puzzling about the experiments

Hi, regarding the experiments on sensitivity towards the source language, I notice that you translate the English test set into Chinese using the Google Cloud API. I'm wondering whether you still needed to label the translated Chinese examples manually, or whether this was done by a script (did you release the code?).

Moreover, I noticed your other paper, Syntax-augmented Multilingual BERT for Cross-lingual Transfer, and I would like to know whether syntax-augmented mBERT can be used for the relation extraction and event extraction tasks in this paper. Have you tested it, and if so, how effective is it? 😇

Thanks for your answers.

How should I handle the data for EARL?

In the 'load_data' function of the file 'cile.inputters.utils.py', there seems to be code only for relation extraction, i.e. "sentence.relation = ex['relation']". How should I handle the data and code to perform EARL? Could you please give me some instructions?
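(A hypothetical sketch of how an EARL instance might be cast into the same subject/object format used for relation extraction, with the event trigger as the subject span and the candidate argument as the object span. All field names below are assumptions about the JSON schema, not taken from the repo.)

```python
# Hypothetical: build an EARL example in a relation-extraction-style format.
def event_to_earl_example(sentence_tokens, trigger, argument, role):
    return {
        "token": sentence_tokens,
        "subj_start": trigger["start"],      # trigger span plays the "subject" role
        "subj_end": trigger["end"],
        "subj_type": trigger["event_type"],  # e.g. use the event type in place of an entity type
        "obj_start": argument["start"],      # candidate argument plays the "object" role
        "obj_end": argument["end"],
        "obj_type": argument["entity_type"],
        "relation": role,                    # the argument role acts as the relation label
    }
```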

A query in the paper

Hello, when reading the Task Description (Section 2) of the paper, I noticed the word "zero-short"; do you mean zero-shot learning?

What are negative samples?

In the paper you followed for data preprocessing (Subburathinam et al. 2019), they state: "We downsample the negative training instances by limiting the number of negative samples to be no more than the number of positive samples for each document". What does a negative sample mean for argument role labeling, and how did you deal with this?
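(For concreteness, a sketch of the downsampling rule quoted above: per document, keep all positive instances and at most an equal number of randomly chosen negatives. The example fields and the "no_relation" label are assumptions for illustration only.)

```python
import random
from collections import defaultdict

def downsample_negatives(examples, seed=1234):
    random.seed(seed)
    by_doc = defaultdict(lambda: {"pos": [], "neg": []})
    for ex in examples:
        # Assumed schema: ex["doc_id"] identifies the document and
        # ex["relation"] == "no_relation" marks a negative instance.
        key = "neg" if ex["relation"] == "no_relation" else "pos"
        by_doc[ex["doc_id"]][key].append(ex)
    kept = []
    for doc in by_doc.values():
        kept.extend(doc["pos"])
        # At most as many negatives as positives per document.
        kept.extend(random.sample(doc["neg"], min(len(doc["neg"]), len(doc["pos"]))))
    return kept
```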

vocab.txt

Hello, could you share your vocab.txt file? In the vocab.txt file I generated, the ordering of the features is different, so my experimental results cannot match yours.

Confusion over Dataset Processing

Excuse me, I would like to ask you a question. I am very interested in your paper and want to run an experiment, but I don't quite understand the data processing part. Which directory should the ACE2005 dataset be placed in, and into what format does it need to be converted? Thank you very much for your answer.
