Comments (9)
There's no direct way to do this.
As a workaround, take the generator as an example: you can refer to the source code and write an `ElectraForMaskedLMWithAnyModel` that takes a pretrained `AutoModel` instance as an argument.
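The suggested wrapper might look roughly like the sketch below. This is a minimal pure-PyTorch illustration of the idea, not the repo's actual code; the class name comes from the comment above, but the head design, constructor arguments, and the `DummyEncoder` stand-in are all assumptions.

```python
import torch
import torch.nn as nn

class ElectraForMaskedLMWithAnyModel(nn.Module):
    """Hypothetical wrapper: any encoder that yields hidden states, plus an MLM head."""
    def __init__(self, backbone: nn.Module, hidden_size: int, vocab_size: int):
        super().__init__()
        self.backbone = backbone  # e.g. a pretrained AutoModel instance
        self.mlm_head = nn.Linear(hidden_size, vocab_size)  # token-prediction head

    def forward(self, input_ids):
        hidden = self.backbone(input_ids)   # (batch, seq_len, hidden_size)
        return self.mlm_head(hidden)        # (batch, seq_len, vocab_size)

# Stand-in backbone so the sketch is self-contained and runnable
class DummyEncoder(nn.Module):
    def __init__(self, vocab_size=100, hidden_size=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden_size)
    def forward(self, input_ids):
        return self.emb(input_ids)

model = ElectraForMaskedLMWithAnyModel(DummyEncoder(), hidden_size=16, vocab_size=100)
logits = model(torch.randint(0, 100, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 100])
```

A real version would also need to handle attention masks and whatever output object the backbone returns, but the shape contract above is the core of it.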
from electra_pytorch.
Thank you for responding to my question. I got it working, but I may be getting strange results from the training process: it always reports a training loss of 0.000000. Is this just because the model is already well trained?
Also, is it normal for each training epoch to take only 1-2 seconds? Or is this a sign that my dataset was poorly configured?
Here is a screenshot of the output of the training process.
There is probably an error being caught.
Because fastai doesn't support specifying the number of training steps, I wrote a callback myself to do that.
The side effect is that it catches any error we encounter.
So if you comment out this callback and run again, you will see the error.
After you resolve the error, you can add the callback back and train normally.
Line 394 in ab29d03
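The swallowing behavior described above can be shown with a toy training loop. None of this is the repo's actual code; it only illustrates why a callback that stops training by raising an exception, combined with over-broad exception handling, can hide genuine errors.

```python
class CancelFit(Exception):
    """Control-flow exception a step-limiting callback raises to stop training."""

def fit(n_steps, step_fn, stop_at=None, swallow=True):
    """Toy training loop; swallow=True mimics over-broad error handling."""
    try:
        for step in range(n_steps):
            step_fn(step)                       # real training work; may raise
            if stop_at is not None and step + 1 >= stop_at:
                raise CancelFit                 # early stop via exception
    except CancelFit:
        return "stopped early"                  # intended behavior
    except Exception:
        if swallow:
            return "error hidden"               # the problematic side effect
        raise                                   # swallow=False ~ callback removed

def bad_step(step):
    raise ValueError("boom")

print(fit(10, bad_step, stop_at=5, swallow=True))  # error hidden
```

With `swallow=False` (analogous to commenting out the callback), the `ValueError` propagates and you see the real traceback.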
Oh perfect, thank you. I was getting an error because I added a special token to the tokenizer and needed to notify the generator and discriminator of the new size of the token embeddings.
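For reference, that fix amounts to growing the embedding matrix to the new vocabulary size while keeping the pretrained rows (Hugging Face models expose this as `resize_token_embeddings`). The torch-only function below is just an illustration of the idea, not code from either library.

```python
import torch
import torch.nn as nn

def resize_embedding(old: nn.Embedding, new_num_tokens: int) -> nn.Embedding:
    """Return a larger embedding that preserves the pretrained rows."""
    new_emb = nn.Embedding(new_num_tokens, old.embedding_dim)
    n = min(old.num_embeddings, new_num_tokens)
    with torch.no_grad():
        new_emb.weight[:n] = old.weight[:n]   # copy pretrained vectors
    return new_emb

emb = nn.Embedding(30522, 128)      # e.g. an original vocab size
emb = resize_embedding(emb, 30523)  # one added special token
print(emb.num_embeddings)  # 30523
```

Rows beyond the original vocabulary keep their fresh random initialization, which is why the new special token starts untrained.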
However, I am now getting a memory error. Usually I resolve this by lowering the batch size, but I am not sure where that is set in your code.
I am using an NVIDIA Tesla P100, and this is the error message:
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 15.90 GiB total capacity; 14.22 GiB already allocated; 291.75 MiB free; 14.73 GiB reserved in total by PyTorch)
Sorry to ask so many questions.
No worries!
Here is where the batch size is set:
Line 79 in ab29d03
You can change it with c.bs = whatever after that line.
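The config object here is essentially a namespace of hyperparameters, so overriding the batch size after it is built looks roughly like this sketch (the field names other than `bs` and all the default values are illustrative, not the repo's actual settings):

```python
from types import SimpleNamespace

# Illustrative defaults, standing in for the config built in pretrain.py
c = SimpleNamespace(bs=128, lr=5e-4, steps=10000)

# Override after the config is created, e.g. to fit a smaller GPU
c.bs = 32
print(c.bs)  # 32
```

Anything downstream that reads `c.bs` then picks up the smaller batch size.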
Awesome, I got it working! I did have to lower my batch size all the way down to 32 with Google Colab Pro, though (quite a bit lower than your presets).
On another note, I noticed your "multi_task.py" file, and it interests me for my own research as well, but I'll open a new issue so as not to bog this one down.
Side question: how can we pretrain ELECTRA starting from the weights of other pretrained models, such as RoBERTa?
Hi, thank you for the wonderful code.
I am trying to continue training from the Google ELECTRA checkpoints, following the steps in this post. I also commented out
RunSteps(c.steps,
[0.0625, 0.125, 0.25, 0.5, 1.0], c.run_name+"_{percent}"),
However, I still get the following error, raised inside the fastai learner file. Do you have any hints on this? I'd appreciate it.
`Traceback (most recent call last):
File "pretrain.py", line 405, in
cbs=[mlm_cb],
.....
File "/home/anaconda3/envs/electra/lib/python3.7/site-packages/fastai/learner.py", line 137, in _call_one
[cb(event_name) for cb in sort_by_run(self.cbs)]
NameError: name 'sort_by_run' is not defined`
I am not sure whether it is due to the package version.
Hi @JiazhaoLi,
Did you solve the problem (sort_by_run not found)? I ran into the same error recently.
Update:
This error can be solved by downgrading fastcore to fastcore<=1.3.13.