Comments (6)
@DataTerminatorX Thank you for the feedback! Can you post the training logs? Did the loss converge? Do you have a dev set to check whether the model is overfitting?
@kylegao91 Thanks for the reply. Here is the training log. The model seems to train well in terms of perplexity, yet the responses are meaningless:
2017-09-05 14:29:29,968 seq2seq.trainer.supervised_trainer INFO Time elapsed: 11m 30s, Progress: 100%, Train Perplexity: 608.8577
2017-09-05 14:29:31,554 seq2seq.trainer.supervised_trainer INFO Finished epoch 10, Dev Perplexity: 591.7412
Type in a source sequence:hi
['EOS']
Type in a source sequence:hello
['EOS']
Type in a source sequence:how are you
['EOS']
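As a side note on reading these numbers: perplexity is the exponential of the average per-token cross-entropy, so a train perplexity near 609 means the model is still roughly as uncertain as a uniform guess over ~600 words. A minimal illustration (the loss value is hypothetical, chosen to match the log above):

import math

# Perplexity is exp(average per-token negative log-likelihood).
# The loss value is hypothetical, picked to reproduce the number above.
avg_nll = 6.41
print(math.exp(avg_nll))  # ~608, i.e. far from a converged model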
The hyperparameters are as follows (see the sketch after this list):
hidden_size = 64
batch_size = 256
num_epoch = 10
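For reference, here is a minimal sketch of wiring these hyperparameters into pytorch-seq2seq, modeled on the library's sample script. The tab-separated data format, max_len, vocabulary cap, and experiment directory are assumptions, and exact signatures may differ between versions:

import torch
import torchtext
from seq2seq.dataset import SourceField, TargetField
from seq2seq.loss import Perplexity
from seq2seq.models import EncoderRNN, DecoderRNN, Seq2seq
from seq2seq.trainer import SupervisedTrainer

hidden_size = 64
batch_size = 256
num_epoch = 10
max_len = 50  # assumption: maximum sequence length

# SourceField/TargetField wrap torchtext fields; TargetField adds <sos>/<eos>.
src, tgt = SourceField(), TargetField()
train = torchtext.data.TabularDataset(
    path='tiwtter_dataTrain_sample.txt', format='tsv',  # assumed TSV pairs
    fields=[('src', src), ('tgt', tgt)])
dev = torchtext.data.TabularDataset(
    path='tiwtter_dataDev_sample.txt', format='tsv',
    fields=[('src', src), ('tgt', tgt)])
src.build_vocab(train, max_size=50000)  # assumed vocabulary cap
tgt.build_vocab(train, max_size=50000)

# Perplexity loss over target tokens, masking padding.
weight = torch.ones(len(tgt.vocab))
pad = tgt.vocab.stoi[tgt.pad_token]
loss = Perplexity(weight, pad)

encoder = EncoderRNN(len(src.vocab), max_len, hidden_size,
                     variable_lengths=True)
decoder = DecoderRNN(len(tgt.vocab), max_len, hidden_size,
                     use_attention=True,
                     eos_id=tgt.eos_id, sos_id=tgt.sos_id)
model = Seq2seq(encoder, decoder)

trainer = SupervisedTrainer(loss=loss, batch_size=batch_size,
                            print_every=100, expt_dir='./experiment')
model = trainer.train(model, train, num_epochs=num_epoch,
                      dev_data=dev, teacher_forcing_ratio=0.5)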
For fast training and validation, the model was trained on a sampled dataset. I've attached the files here:
tiwtter_dataTrain_sample.txt
tiwtter_dataDev_sample.txt
I once trained the model on the whole dataset (700,000 pairs in train and 50,000 pairs in dev), and got almost the same responses as when training on the sample dataset.
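For anyone regenerating such a subsample from the full corpus, a minimal sketch; the full-corpus path and sample size are hypothetical:

import random

# Draw a fixed-size random sample of dialogue pairs from the full corpus.
# 'twitter_data_full.txt' and the sample size of 10,000 are hypothetical.
random.seed(0)  # reproducible sample
with open('twitter_data_full.txt') as f:
    pairs = f.readlines()
with open('tiwtter_dataTrain_sample.txt', 'w') as f:
    f.writelines(random.sample(pairs, 10000))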
By the way, I find the pytorch-seq2seq project well organized from a software-design standpoint. I hope to contribute once this issue is solved; we could add a chatbot feature to it.
@DataTerminatorX Thanks for expressing your interest. I will look into the issue and let you know.
I ran the sample data with hidden_size=256, and I found that the log you posted is far from convergence. See the log and some results I got after 20 epochs below (they still don't make much sense, but they're better than empty):
2017-09-05 17:14:03,380 seq2seq.trainer.supervised_trainer INFO Progress: 95%, Train Perplexity: 9.3085
2017-09-05 17:14:56,364 seq2seq.trainer.supervised_trainer INFO Progress: 95%, Train Perplexity: 7.7437
2017-09-05 17:15:49,935 seq2seq.trainer.supervised_trainer INFO Progress: 96%, Train Perplexity: 8.1597
2017-09-05 17:16:42,897 seq2seq.trainer.supervised_trainer INFO Progress: 97%, Train Perplexity: 8.1890
2017-09-05 17:17:35,497 seq2seq.trainer.supervised_trainer INFO Progress: 98%, Train Perplexity: 7.7902
2017-09-05 17:18:28,137 seq2seq.trainer.supervised_trainer INFO Progress: 99%, Train Perplexity: 8.5640
2017-09-05 17:19:21,026 seq2seq.trainer.supervised_trainer INFO Progress: 99%, Train Perplexity: 8.4239
2017-09-05 17:20:00,278 seq2seq.trainer.supervised_trainer INFO Finished epoch 20: Train Perplexity: 8.1201, Dev Perplexity: 6.3091
Type in a source sequence:how are you
[u"you're", u'me', u'ever', u'amp', u'what', u'are', u'you', u'to', '<eos>']
Type in a source sequence:i'm dead not looking forward to thi
[u"shouldn't", u'has', u'the', u'natural', u'answer', u'to', u'hashtag', u'hashtag', u'hashtag', u'to', '<eos>']
Type in a source sequence:i m waiting for to wake up p
[u'then', u'some', u'some', u'sf', u'hipsters', u'would', u'rather', u'get', u'rid', u'of', u'and', '<eos>']
Type in a source sequence:and then there's these folks
[u'the', u'good', u'place', '<eos>']
Type in a source sequence:i'm good how are you
[u'good', u'good', u'too', u'what', u'are', u'you', '<eos>']
Type in a source sequence:how are you
[u"you're", u'me', u'ever', u'amp', u'what', u'are', u'you', u'to', '<eos>']
Type in a source sequence:hellow
[u"it's", u"we're", u'all', u'i', u"can't", '<eos>']
Type in a source sequence:hello
[u'girl', u'girl', u'maria', u'here', u'at', u'here', u'mocking', u'mocking', u'mocking', '<eos>']
Type in a source sequence:my life story
[u'this', u'is', u'why', u'is', u"we're", '<eos>']
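For reference, the interactive loop above matches the evaluation snippet in the library's sample script; a minimal sketch, assuming the trained model and fields from the training sketch earlier in the thread:

from seq2seq.evaluator import Predictor

# 'model', 'src', and 'tgt' are the trained model and fields from training.
predictor = Predictor(model, src.vocab, tgt.vocab)
while True:
    seq_str = input("Type in a source sequence:")  # raw_input on Python 2
    seq = seq_str.strip().split()
    print(predictor.predict(seq))  # greedily decoded tokens, ending in <eos>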
@DataTerminatorX I will close this issue. Let us know if you have more questions.