
ULMFIT for German

A pre-trained German language model for ULMFIT.

Because German is more inflected than English, the model uses sub-word tokenization to split text into chunks. Tokenization is based on BPEmb, a pre-trained sub-word model by Benjamin Heinzerling and Michael Strube, with a fixed vocabulary size of 25,000.

For instance, this German sentence:

'Zeitungskommentare sind eine hervorragende Möglichkeit zum Meinungsaustausch.'

is broken into the following tokens:

['▁zeitungs', 'komment', 'are', '▁sind', '▁eine', '▁hervor', 'ragende', '▁möglichkeit', '▁zum', '▁meinungs', 'austausch', '.']
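To reproduce this, a minimal sketch (assuming the bpemb package is installed; BPEmb is trained on lowercased text, so the input has to be lowercased first):

from bpemb import BPEmb

# downloads the pre-trained German sub-word model on first use
bpemb_de = BPEmb(lang="de", vs=25000, dim=300)

# encode() returns the sub-word tokens for a lowercased input string
tokens = bpemb_de.encode("zeitungskommentare sind eine hervorragende möglichkeit zum meinungsaustausch.")
print(tokens)
# ['▁zeitungs', 'komment', 'are', '▁sind', ...]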

Download

Download the model (~350 MB) directly or use wget:

wget https://github.com/jfilter/ulmfit-for-german/releases/download/0.1.0/ulmfit_for_german_jfilter.pth

Usage

from bpemb import BPEmb
from cleantext import clean
from fastai.text import *

# this downloads the required model for sub-word tokenization on first use
bpemb_de = BPEmb(lang="de", vs=25000, dim=300)

# construct the vocabulary; 'xxpad' is appended as padding token (ID 25000)
itos = dict(enumerate(bpemb_de.words + ['xxpad']))
voc = Vocab(itos)

# encode all texts as token IDs (clean() lowercases the input, as BPEmb expects)
df_train['text'] = df_train['text'].apply(lambda x: bpemb_de.encode_ids_with_bos_eos(clean(x, lang='de')))
df_valid['text'] = df_valid['text'].apply(lambda x: bpemb_de.encode_ids_with_bos_eos(clean(x, lang='de')))

# setup language model data
data_lm = TextLMDataBunch.from_ids('exp', vocab=voc, train_ids=df_train['text'], valid_ids=df_valid['text'])

# set up the learner; download the pre-trained model beforehand
# because of breaking changes in fastai, the number of hidden units has to be adjusted: https://github.com/jfilter/ulmfit-for-german/issues/1
config = awd_lstm_lm_config.copy()
config['n_hid'] = 1150
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5, pretrained=False, config=config)
learn_lm.load('ulmfit_for_german_jfilter') # the model file has to be placed at `exp/models/ulmfit_for_german_jfilter.pth`

# ... train the language model etc.; see the sketch after this code block ...

# set up the text classifier data
# NB: set the padding index to 25000 (the ID of 'xxpad', see above)
classes = df_train['label'].unique().tolist()
data_clas = TextClasDataBunch.from_ids('exp', pad_idx=25000, vocab=voc, classes=classes,
    train_lbls=df_train['label'], train_ids=df_train['text'],
    valid_lbls=df_valid['label'], valid_ids=df_valid['text'])
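
The placeholder above covers the usual ULMFIT steps. A minimal sketch of how fine-tuning the language model and training the classifier could look in fastai v1 (the learning rates and epoch counts are illustrative assumptions, not values from this repository):

# fine-tune the language model on the target corpus
learn_lm.fit_one_cycle(1, 1e-2)
learn_lm.unfreeze()
learn_lm.fit_one_cycle(3, 1e-3)
learn_lm.save_encoder('ft_enc')  # save the encoder for the classifier

# build the classifier on top of the fine-tuned encoder; mirror the n_hid fix
config_clas = awd_lstm_clas_config.copy()
config_clas['n_hid'] = 1150
learn_clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5, pretrained=False, config=config_clas)
learn_clas.load_encoder('ft_enc')

# gradual unfreezing, as in the ULMFIT paper
learn_clas.fit_one_cycle(1, 2e-2)
learn_clas.freeze_to(-2)
learn_clas.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))
learn_clas.unfreeze()
learn_clas.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))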

See the Notebook for a complete walkthrough. NB: the notebook doesn't run with the latest version of the fastai library; the issue linked above explains how to fix this.

Training of the Language Model

The model was trained on the German Wikipedia as well as national and regional German news articles. Only documents with a length of at least 500 were kept. This results in 3,278,657 documents with a total of 1,197,060,244 tokens. 5% of the data (chosen randomly) forms the validation set; the rest is the training set. For training, we take the default configuration of the English language model but disable dropout entirely: the amount of data is large enough that strong regularization is not needed. We trained for five epochs with a batch size of 128, which took three days on a single GTX 1080 Ti. The learning rate was chosen automatically by the learning rate finder (~0.007). The training curves are shown below; the final model achieves a perplexity of 32.52 on the validation set.
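
A sketch of what this pre-training setup could look like in fastai v1 (variable names such as train_ids and valid_ids are assumptions, the data preparation follows the Usage section, and whether one-cycle scheduling was used is also an assumption):

# language model data over the Wikipedia + news corpus, batch size 128
data_lm = TextLMDataBunch.from_ids('exp', vocab=voc, train_ids=train_ids, valid_ids=valid_ids, bs=128)

# drop_mult=0. disables dropout entirely, as described above
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0., pretrained=False)

# pick the learning rate with the learning rate finder (suggested ~0.007)
learn_lm.lr_find()
learn_lm.recorder.plot()

# five epochs of training
learn_lm.fit_one_cycle(5, 7e-3)
learn_lm.save('ulmfit_for_german')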

[Figure: training curves of the language model.]

Experiments

To test the performance of the model, experiments were conducted on the 10kGNAD dataset. The dataset consists of about 10k German news articles in 9 classes. The best model achieved an accuracy of 91% on the validation set and 88.3% on the test set. The details are in the accompanying Notebook.
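
A minimal sketch of how such numbers could be computed with the classifier learner from above (data_test is a hypothetical databunch built over the 10kGNAD test split; this is not code from the repository):

from fastai.metrics import accuracy

learn_clas.metrics = [accuracy]

# accuracy on the validation set
loss, acc = learn_clas.validate(data_clas.valid_dl)
print(f'validation accuracy: {acc:.3f}')

# accuracy on the test set
loss, acc = learn_clas.validate(data_test.valid_dl)
print(f'test accuracy: {acc:.3f}')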

Citation

If you find the language model useful for an academic publication, please use the following BibTeX entry to cite it:

@misc{ulmfit_german_filter_2019,
    title={A Pre-trained German Language Model with Sub-word Tokenization for ULMFIT},
    author={Johannes Filter},
    year={2019},
    publisher={GitHub},
    howpublished={\url{https://github.com/jfilter/ulmfit-for-german}},
}

License

MIT.


Issues

Any chance to share a predict statement of your classifier?

Thank you for your pretrained language model!

When trying to evaluate individual inputs, I get:

KeyError: tensor(0)

I tried multiple variants, but the one I would have expected to work is this one:

learn.predict(df_train['text'].iloc[0])
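
A possible workaround (a sketch, not an answer from the repository): because the data bunch was built from pre-encoded IDs, learn.predict cannot tokenize raw text with its default pipeline. Encoding the input with BPEmb manually and feeding the IDs to the model sidesteps this:

import torch

# encode the raw text exactly as during training (clean + BPEmb IDs)
ids = bpemb_de.encode_ids_with_bos_eos(clean('Ein Beispieltext.', lang='de'))
xb = torch.tensor([ids], device=next(learn.model.parameters()).device)

learn.model.reset()
learn.model.eval()
with torch.no_grad():
    logits = learn.model(xb)[0]  # the classifier returns (logits, raw_outputs, outputs)
print(classes[logits.argmax().item()])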

Error when using your pretrained language model for the 10kGNAD data

Hey,
unfortunately, I am getting the following error when running your example code on the 10kGNAD dataset. It occurs when loading the model (learn_lm.load('ulmfit_for_german_jfilter')). I hope you can help me.

Best, Jacob

RuntimeError: Error(s) in loading state_dict for SequentialRNN:
size mismatch for 0.rnns.0.weight_hh_l0_raw: copying a param with shape torch.Size([4600, 1150]) from checkpoint, the shape in current model is torch.Size([4608, 1152]).
size mismatch for 0.rnns.0.module.weight_ih_l0: copying a param with shape torch.Size([4600, 400]) from checkpoint, the shape in current model is torch.Size([4608, 400]).
size mismatch for 0.rnns.0.module.weight_hh_l0: copying a param with shape torch.Size([4600, 1150]) from checkpoint, the shape in current model is torch.Size([4608, 1152]).
size mismatch for 0.rnns.0.module.bias_ih_l0: copying a param with shape torch.Size([4600]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for 0.rnns.0.module.bias_hh_l0: copying a param with shape torch.Size([4600]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for 0.rnns.1.weight_hh_l0_raw: copying a param with shape torch.Size([4600, 1150]) from checkpoint, the shape in current model is torch.Size([4608, 1152]).
size mismatch for 0.rnns.1.module.weight_ih_l0: copying a param with shape torch.Size([4600, 1150]) from checkpoint, the shape in current model is torch.Size([4608, 1152]).
size mismatch for 0.rnns.1.module.weight_hh_l0: copying a param with shape torch.Size([4600, 1150]) from checkpoint, the shape in current model is torch.Size([4608, 1152]).
size mismatch for 0.rnns.1.module.bias_ih_l0: copying a param with shape torch.Size([4600]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for 0.rnns.1.module.bias_hh_l0: copying a param with shape torch.Size([4600]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for 0.rnns.2.module.weight_ih_l0: copying a param with shape torch.Size([1600, 1150]) from checkpoint, the shape in current model is torch.Size([1600, 1152]).

(This is the breaking fastai change addressed in the Usage section above: copying awd_lstm_lm_config and setting config['n_hid'] = 1150 restores the shapes the checkpoint expects.)
