andrew03 / transformer-abstractive-summarization
Code for the paper "Efficient Adaptation of Pretrained Transformers for Abstractive Summarization"
Hi,
I am trying to reproduce the results from this paper on the CNN/DailyMail data. Is it possible to share the pretrained weights for the CNN/DM dataset?
Thanks
Hi Andrew,
This repo is really helpful, but I encountered an error during validation.
Traceback (most recent call last):
File "/train.py", line 283, in
main(args)
File "/train.py", line 227, in main
start_iter, running_loss = run_epoch(start_iter, running_loss, dh_model, summary_loss, model_opt, train_loader, val_loader, train_log_interval, val_log_interval, device, beam, gen_len, k, decoding_strategy, accum_iter, "FT Training Epoch [{}/{}]".format(i + 1, args.num_epochs_ft), save_dir, logger, text_encoder, show_progress=args.show_progress)
File "/train.py", line 132, in run_epoch
val_loss, scores = evaluate(val_loader, train_log_interval, model, text_encoder, device, beam, gen_len, k, decoding_strategy, summary_loss if summary_loss else compute_loss_fct)
File "/train.py", line 97, in evaluate
src_strs, new_refs, new_hyps = generate_outputs(model, pad_seq, mask_seq, text_encoder, device, beam, gen_len, k, decoding_strategy)
File "/generate.py", line 18, in generate_outputs
outputs = model(pad_output, mask_output, text_encoder, device, beam=beam, gen_len=gen_len, k=k, decoding_strategy=decoding_strategy, generate=True, min_len=min_len)
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/anaconda3/envs/apex/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/model_pytorch.py", line 220, in forward
return self.generate(pad_output, mask_output, text_encoder, device, beam, gen_len, k, decoding_strategy, min_len=min_len)
File "/model_pytorch.py", line 365, in generate
generated_toks = self.beam_search(XMB, mask, classify_idx, text_encoder, beam=beam, gen_len=gen_len, min_len=min_len)
File "/model_pytorch.py", line 333, in beam_search
if finished_mask[i].item() == 1:
RuntimeError: CUDA error: an illegal memory access was encountered
It seems like an error related to the PyTorch and cudatoolkit versions (my env: pytorch==1.3.0, cudatoolkit==10.1).
Have you encountered this error?
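For reference, here is a minimal debugging sketch of how this could be narrowed down. It is not the repo's code: it assumes beam_search indexes a CUDA finished_mask tensor one element at a time, and the helper name below is hypothetical.

```python
import os
import torch

# Make CUDA kernel launches synchronous (set before the first CUDA call),
# so the traceback points at the op that actually faulted instead of a
# later, unrelated call -- illegal accesses are otherwise reported lazily.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

def finished_beams(finished_mask: torch.Tensor):
    # Copy the mask to the CPU once and index it in Python, rather than
    # calling .item() on individual CUDA tensor elements inside the loop.
    finished = finished_mask.detach().cpu().tolist()
    return [i for i, flag in enumerate(finished) if flag == 1]
```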
Also, the code cannot be debugged in the PyCharm IDE.
Thanks :)
This repo is super great - I hope all goes well with the paper submission!
I was hoping to take a model like this and experiment with it, but doing the adaptive training myself (although you have good instructions) would require 20 epochs of training, if I read the paper correctly. That wouldn't be too bad, except that each epoch takes over 12 hours on the best GPUs I have access to. I could go through all the work to reproduce the results, but it would be expensive and I'd prefer not to.
Is there any way you could upload the weights you used for the summarization tasks? If that's not feasible I would understand, but I thought I would ask.
Thanks!
Hello,
Thank you for this work, I find it very interesting.
I have an issue running the evaluation code on the test set after training the model. I get this error when the program tries to load the weights from the checkpoint:
Traceback (most recent call last):
File "/home/users/jlopez/codes/transabs/evaluate.py", line 184, in <module>
main(args)
File "/home/users/jlopez/codes/transabs/evaluate.py", line 131, in main
dh_model.load_state_dict(state_dict)
File "/home/users/jlopez/.conda/envs/transabs_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for LMModel: Unexpected key(s) in state_dict: "transformer.article_embed.weight", "transformer.summary_embed.weight".
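The unexpected keys look like embeddings from a model variant that adds separate article/summary embeddings, which the model built by evaluate.py apparently does not have. Below is a minimal sketch of one possible way to unblock the load; it assumes those two tensors can be dropped or ignored (which discards learned weights and may change results), the checkpoint path is a placeholder, and this is not a fix confirmed by the authors.

```python
import torch

# Hypothetical checkpoint path, for illustration only.
state_dict = torch.load("checkpoint_best.pt", map_location="cpu")

# Drop the keys that the evaluation-time model does not expect.
for key in ("transformer.article_embed.weight", "transformer.summary_embed.weight"):
    state_dict.pop(key, None)

# dh_model is the model constructed in evaluate.py (as in the traceback above).
dh_model.load_state_dict(state_dict)

# Alternatively, keep the checkpoint intact and ignore the mismatch:
# dh_model.load_state_dict(torch.load("checkpoint_best.pt", map_location="cpu"), strict=False)
```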
I hope you can help me fix the problem soon. Thank you!