ukplab / controlled-argument-generation
Controlling Argument Generation via topic, stance, and aspect
License: BSD 3-Clause "New" or "Revised" License
Solved
During training in your paper, did you train a single model on the data across all the topics, or one model per topic, each tested on its own topic (i.e., eight models for the eight topics)?
If you trained only one model, how did you decide the --iterations value?
Thanks a lot!
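For anyone with the same question: a common heuristic (not necessarily what the paper used) is to derive the iteration count from dataset size, batch size, and a target number of epochs. The numbers below are purely illustrative:

```python
def iterations_for(num_examples, batch_size, epochs):
    # Steps per epoch times number of epochs; a common heuristic,
    # not a value taken from the paper.
    return epochs * (num_examples // batch_size)

# E.g., 200k training sentences, batch size 256, 3 epochs:
print(iterations_for(num_examples=200_000, batch_size=256, epochs=3))
```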
Hi, thank you very much for providing this interesting paper.
I have a question regarding the aspect detection task. The paper says you fine-tuned the BERT model on the new aspect-annotated data (i.e., the output of the crowdsourcing), so why does the script use ipa to detect aspects for the downstream task? Do the two models (the fine-tuned BERT-large and ipa.aspect) give the same results? If not, could we have the BIO-tagged data and the checkpoint of the final BERT model? We hope to replicate your aspect detection experiment. Many thanks in advance for your time. :)
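For reference, BIO tagging marks aspect spans token by token: B- opens a span, I- continues it, O is outside any span. A toy example of the encoding (tokens and labels are illustrative, not from the released data):

```python
tokens = ["Nuclear", "waste", "storage", "is", "risky"]
# B-ASP opens an aspect span, I-ASP continues it, O is outside any aspect.
labels = ["B-ASP", "I-ASP", "I-ASP", "O", "O"]

# Recover the aspect span from the BIO labels.
aspect = [tok for tok, lab in zip(tokens, labels) if lab != "O"]
print(aspect)
```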
I'm trying to only use the generation component and have followed the Usage steps here and on the CTRL github. However, the generated output simply repeats the last word of my prompt.
On going through the warnings and output, I find the following:
WARNING:tensorflow:You are creating an Estimator from a Keras model manually subclassed from
Model, that was already called on some inputs (and thus already had weights). We are currently unable to preserve the model's state (its weights) as part of the estimator in this case. Be warned that the estimator has been created using a freshly initialized version of your model.
I found a related issue on the CTRL GitHub which was solved by updating the estimator.patch and/or keras.py files. I assume those changes are already implemented in the files in your repo.
Any idea on how I can deal with this issue?
Hi again, I'm trying to use my own dataset in the pipeline, following the steps you have listed. I've split the documents into JSON files. When I run the argument-classification script, I get the following error:
$ python argument_classification.py --topic culture --index arguana
Start classifying sentences for topic "culture" from doc_id_start 0 with MAX_FILE_SIZE 200000, and FILTER_TOPIC set "True"". Writing to ../../training_data/arguana/culture/
0%| | 0/170 [00:00<?, ?it/s]
string indices must be integers
Crashed at doc_id 0
This error tends to arise from indexing a string with a key where a dictionary was expected. Is there anything I need to change in how the JSON files are constructed?
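For context, that TypeError is what Python raises when code written for a list of dicts receives plain strings instead. A minimal reproduction (the data layout here is purely illustrative):

```python
# If a document stores its sentences as one plain string rather than
# the expected list, code doing sent["text"]-style access indexes a
# str with a key and raises "string indices must be integers".
doc = {"sents": "a single string instead of a list"}

try:
    first = doc["sents"][0]["text"]  # doc["sents"][0] is the str "a"
except TypeError as exc:
    msg = str(exc)

print(msg)
```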
EDIT
Fixed the issue. I was creating the JSON files incorrectly.
import json
import spacy

nlp = spacy.load('en_core_web_sm')

def makeSentList(var):
    # Sentence-split the document text with spaCy, returning plain strings.
    about_doc = nlp(var)
    return [str(sent) for sent in about_doc.sents]

def makeFile(lst):
    # Write the sentence list in the {"sents": [...]} layout the
    # classification script expects.
    with open('doc.json', 'w') as filehandle:
        json.dump({'sents': lst}, filehandle)
One can use the above functions to make a JSON file for the task: var is the document to process, and lst is the output of makeSentList.
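The resulting file layout can be sanity-checked with the standard library alone; the sentence list below is a hypothetical stand-in for makeSentList output:

```python
import json

# Hypothetical sentences standing in for makeSentList output.
sentences = ["Culture shapes society.", "It also shapes policy."]

with open('doc.json', 'w') as fh:
    json.dump({'sents': sentences}, fh)

with open('doc.json') as fh:
    doc = json.load(fh)

# Each entry under "sents" must be a plain string, not a token list,
# or downstream string indexing will fail.
assert all(isinstance(s, str) for s in doc['sents'])
print(doc['sents'])
```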
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node encoder/encoder_layer/layer_normalization/moments/variance (defined at /tmp/tmpxfTS75.py:8) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[tied_embedding_softmax_1/add/_3091]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node encoder/encoder_layer/layer_normalization/moments/variance (defined at /tmp/tmpxfTS75.py:8) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
N.B. I am using a Colab Pro GPU (16 GB). Is that enough, or can you suggest an alternative?
Hi,
Thanks for sharing the code! I have a quick question about the model inputs: do you use any special token (e.g., [SEP]) to separate the topic, stance, aspects, and arguments, or do you simply concatenate them? I really appreciate your reply. :)
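I don't know the paper's exact input scheme, but CTRL-style models typically prepend plain-text control codes rather than special separator tokens. A sketch of flat concatenation (the format and function name are my own, not taken from the repo):

```python
def build_prompt(topic, stance, aspect):
    # Hypothetical flat concatenation: control attributes joined by
    # spaces, no special separator tokens. The paper's actual input
    # format may differ.
    return f"{topic} {stance} {aspect}"

prompt = build_prompt("nuclear energy", "CON", "waste disposal")
print(prompt)
```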