This is a simple PyTorch implementation of BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding) built on the awesome PyTorch BERT library.
- IMDB (Internet Movie Database) : To test the model, I use a dataset of 50,000 movie reviews taken from IMDb. It is divided into 'train' and 'test' sets, each containing 25,000 movie reviews with labels (positive, negative). You can access the dataset with this link
- Naver Movie review : A well-scraped dataset of Naver movie reviews (Korean). link
Follow the example
There are a lot of options to check.
- train_path : The file used to train the model
- valid_path : The file used to validate the model
- max_length : Maximum number of tokens to analyze (the BERT model restricts this parameter to at most 512)
- save_path : The path where the trained BERT classifier model is saved
- bert_name : The name of the pretrained BERT model. Default : bert-base-uncased (more information about the pytorch BERT models can be found in this link)
- bert_finetuning : Set this option to "True" to fine-tune the BERT model together with the classifier layer
- dropout_p : Dropout probability applied to the BERT output vector before it enters the classifier layer
- boost : If you don't need to fine-tune BERT, training can be sped up by preconverting tokens to BERT output vectors once and reusing them (see the frozen-BERT sketch below)
- n_epochs : Number of epochs to train
- lr : Learning rate of the classifier layer
- lr_main : Learning rate of BERT during fine-tuning
- early_stop : Early-stopping condition. If you don't want to use this option, pass -1
- batch_size : Batch size for training
- gradient_accumulation_steps : BERT is too heavy a model to handle a large batch size on a light GPU, so I implement gradient accumulation: smaller batches are processed while achieving almost the same effect as a large batch (see the sketch after the example command below)
python train.py --train_path source/train.csv --valid_path source/test.csv --batch_size 16 --gradient_accumulation_steps 4 --boost True
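The core idea of gradient accumulation is sketched below. This is a minimal illustration, not the exact code in train.py; `model`, `optimizer`, `criterion`, `train_loader`, and the batch layout are assumptions.

```python
import torch

# Minimal sketch of gradient accumulation (not the exact code in train.py).
# `model`, `optimizer`, `criterion`, and `train_loader` are assumed to exist.
accumulation_steps = 4  # corresponds to --gradient_accumulation_steps 4

optimizer.zero_grad()
for step, (input_ids, labels) in enumerate(train_loader):
    logits = model(input_ids)
    loss = criterion(logits, labels)

    # Scale the loss so the accumulated gradient matches one large batch.
    (loss / accumulation_steps).backward()

    # Update weights only every `accumulation_steps` mini-batches,
    # emulating an effective batch size of batch_size * accumulation_steps.
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

With --batch_size 16 and --gradient_accumulation_steps 4 as in the command above, the effective batch size is 64 while only 16 examples ever sit on the GPU at once.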
Results with hyperparameter settings
Dataset | BERT pretrained | BERT fine-tune | Max token length | Best epoch | Train loss | Valid loss | Valid accuracy |
---|---|---|---|---|---|---|---|
IMDB | bert-base-uncased | True | 256 | 1 | 0.0169 | 0.0129 | 0.9181 |
IMDB | bert-base-uncased | True | 512 | 1 | 0.0151 | 0.0112 | 0.9292 |
IMDB | bert-base-uncased | False | 256 | 10 | 0.0289 | 0.0276 | 0.8027 |
IMDB | bert-base-uncased | False | 512 | 10 | 0.0269 | 0.0259 | 0.8194 |
Naver | bert-base-multilingual-cased | True | 512 | 4 | 0.0135 | 0.0199 | 0.8743 |
Naver | bert-base-multilingual-uncased | True | 512 | 4 | 0.0126 | 0.0198 | 0.8743 |
Naver | kobert | True | 512 | 2 | 0.0145 | 0.0163 | 0.8961 |
The fine-tuning results are remarkable. Simply taking the BERT output (without fine-tuning) and passing it through a single linear layer is not enough to handle the data.
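For intuition, here is a minimal sketch of the frozen-BERT setting (and the precomputation idea behind the boost option). It uses the Hugging Face transformers API, which is an assumption — this repo may rely on an older PyTorch BERT package — and the classifier head, dropout value, and class count are illustrative, not the exact code of this project.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer  # assumption: Hugging Face transformers API

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # BERT stays frozen when bert_finetuning is False

# Illustrative classifier head: dropout on the BERT vector, then one linear layer.
classifier = nn.Sequential(
    nn.Dropout(p=0.1),                         # dropout_p
    nn.Linear(bert.config.hidden_size, 2),     # positive / negative
)

texts = ["A wonderful movie.", "Terribly boring."]
encoded = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    # Because BERT never changes here, these vectors can be precomputed once
    # and cached for the whole dataset -- this is what the boost option exploits.
    pooled = bert(**encoded).pooler_output

logits = classifier(pooled)
```

Since BERT's parameters never update in this setting, only the single linear layer learns, which explains the accuracy gap against fine-tuning in the table above.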
My PyTorch implementation is heavily inspired by other works. Please see below for those works.