Comments (13)

xchgaur commented on September 25, 2024

But if I have to decrease the learning rate, I can do that while training, right?
Are there defined steps to stop and restart the training, or can I just hit Ctrl+C and run "python train.py" again?

daniel-kukiela commented on September 25, 2024

You can stop the training process.
If you start it again, it will resume from the previous full 5k steps, so it's best to stop right after the next full 5k steps (you'll notice the script running inference against the dev and test sets and saving hparams; stop right after that moment. I don't have a screenshot to show you right now).

You can use multiple files; just iterate over a list of files on top of opening a single file (https://pythonprogramming.net/building-database-chatbot-deep-learning-python-tensorflow/ - add a loop around the with open... block).
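
A minimal sketch of that loop, assuming plain-text input files; the glob pattern and the per-row processing are placeholders for whatever the tutorial's single-file version does:

import glob

# Placeholder pattern; point it at wherever your raw data files live.
data_files = sorted(glob.glob("data/*.json"))

for path in data_files:
    # Same single-file logic as in the tutorial, just wrapped in a loop.
    with open(path, "r", encoding="utf-8", buffering=1000) as f:
        for row in f:
            pass  # process each row exactly as in the single-file version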

daniel-kukiela commented on September 25, 2024

@xchgaur Yes, you can change the learning rate, but I highly suggest doing that in the hparams file in the model folder. You can also change it in the config and enable the hparams override, but that might actually reset the progress counter.
You should stop training only every 5k steps (or at the end of an epoch). You'll know the moment because the test and dev files get evaluated; stop right after those evaluations.
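
For reference, a rough sketch of editing it in place, assuming the hparams file in the model folder is the JSON file the training script writes out (the path and the learning_rate key are assumptions; check your own file):

import json

# Path is an assumption; adjust it to your model folder.
hparams_path = "model/hparams"

with open(hparams_path, "r", encoding="utf-8") as f:
    hparams = json.load(f)

# Lower the learning rate for the next stretch of training.
hparams["learning_rate"] = 0.0005

with open(hparams_path, "w", encoding="utf-8") as f:
    json.dump(hparams, f, indent=2)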

spiderwisp commented on September 25, 2024

Perhaps I'm doing something wrong, but any time I add new data, run prepare_data.py on it, and then try train.py, I get an error like this:

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [531,512] rhs shape= [507,512]
         [[Node: save/Assign_39 = Assign[T=DT_FLOAT, _class=["loc:@embeddings/embedding_share"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embeddings/embedding_share, save/RestoreV2:39)]]

daniel-kukiela commented on September 25, 2024

You can't change the data during the training process. So if you change anything in the training set, remove the model folder and start training again.
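
In other words, something along these lines before retraining, assuming the checkpoints live in a model/ directory (back it up first if you want to keep the old run):

import shutil

# Path is an assumption; adjust to wherever your checkpoints live.
# This wipes the old checkpoints and hparams so training starts fresh
# with the new vocab.
shutil.rmtree("model", ignore_errors=True)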

spiderwisp commented on September 25, 2024

Hm, so there's no way, then, to take an existing model and "teach it new things"?

I guess I need to understand TensorFlow better to understand this part.

spiderwisp commented on September 25, 2024

What I'm still a bit confused about is the learning rate. It only seems to take effect if I set it before running prepare_data.py. I'm not sure how to adjust this parameter once training has started.

daniel-kukiela commented on September 25, 2024

@spiderwisp Yes, you can't add data to that model, because that would change the vocab file, and the vocab file cannot change: the word embeddings would no longer match (the network doesn't learn the vocab elements themselves, but rather their index numbers).
About the learning rate: see my response above :)
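
That is also why the error above complains about mismatched shapes (lhs [531, 512] vs rhs [507, 512]): the embedding matrix has one learned row per vocab index, so a checkpoint saved for one vocab size can't be restored into a graph built for another. A rough numpy illustration of the idea (sizes mirror the error message; the code is not from the project):

import numpy as np

embedding_dim = 512
old_vocab_size = 507   # vocab size when the checkpoint was saved
new_vocab_size = 531   # vocab size after adding new data

# One learned row per vocab index; words are looked up by index, not by text.
saved_embeddings = np.random.randn(old_vocab_size, embedding_dim)
vector_for_token_42 = saved_embeddings[42]

# The checkpoint's [507, 512] matrix can't be restored into a graph that
# now expects [531, 512], and even the overlapping indices may map to
# different words after the vocab is rebuilt.
print(saved_embeddings.shape, "vs expected", (new_vocab_size, embedding_dim))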

spiderwisp commented on September 25, 2024

Thanks! I did try tweaking the learning rate in the model hparams after 5,000 steps, but it didn't go well. I ended up with the following :)

# Final, step 5300 lr 0.0005 step-time 0.00 wps 0.00K ppl 0.00, dev ppl inf, dev bleu 0.0, test ppl inf, test bleu 0.0, Wed Mar  7 19:04:27 2018
# Done training!, time 158s, Wed Mar  7 19:04:27 2018.
# Start evaluating saved best models.

daniel-kukiela commented on September 25, 2024

You probably shouldn't touch the learning rate until a full epoch has finished, then lower the LR for the next one :)
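
If you'd rather script that than edit hparams by hand each time, the idea is just a simple per-epoch decay, sketched here with example numbers (not the project's defaults):

initial_lr = 0.001   # example starting value only
decay = 0.5          # halve the learning rate after each full epoch

for epoch in range(4):
    lr = initial_lr * (decay ** epoch)
    print(f"epoch {epoch}: learning rate = {lr}")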

spiderwisp commented on September 25, 2024

Awesome, thanks for the advice. Is there a better place to discuss your project than here in the issues section? One idea I had was to scrape something like Stack Overflow and feed it in. My thought process is that if I feed in the questions and use the code from the answers as the train.to data, we could train it to output code by asking it (feeding the model) a coding question.
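
A rough sketch of that data prep, assuming the usual parallel train.from / train.to layout with one example per line; the qa_pairs list and file paths are placeholders for whatever your scraper produces:

# qa_pairs is hypothetical; it stands in for whatever your Stack Overflow
# scraper produces as (question, answer_code) pairs.
qa_pairs = [
    ("How do I reverse a list in Python?", "my_list[::-1]"),
]

with open("train.from", "w", encoding="utf-8") as f_from, \
     open("train.to", "w", encoding="utf-8") as f_to:
    for question, answer_code in qa_pairs:
        # One example per line; flatten newlines so the two files stay
        # aligned line-by-line.
        f_from.write(question.replace("\n", " ") + "\n")
        f_to.write(answer_code.replace("\n", " ") + "\n")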

spiderwisp commented on September 25, 2024

If we could get that bot working, we could integrate it with an IDE like NetBeans. It could really speed up coding: you could ask it how to do something and it would output the code right in the IDE 👍

daniel-kukiela commented on September 25, 2024

Nice idea!
There is a Discord server we are active in: https://discord.gg/57xEY2
