Source code for wikidit.
This requires the Anaconda distribution with Python 3.6, since a conda environment is used to manage dependencies.
Create and activate a conda environment with the dependencies for this project.
$ conda env create --force -f environment.yml
$ conda activate wikidit
The app can be run locally for development with the Flask development web server using:
$ python app.py
It can be run in production with a WSGI server such as gunicorn:
$ gunicorn --bind 0.0.0.0:8000 app:app
Download texts for revisions in the training sample from the Wikipedia API.
$ python -m download_enwiki_wp10_revisions \
rawdata/enwiki.labeling_revisions.nettrom_30k.json \
enwiki.labeling_revisions.w_text.nettrom_30k.ndjson.gz
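The download script itself is not reproduced here, but fetching the wikitext of a single revision from the MediaWiki API can be sketched with only the standard library. The function names and parameters below are illustrative, not part of the project:

```python
import json
import urllib.parse
import urllib.request

# English Wikipedia API endpoint
API_URL = "https://en.wikipedia.org/w/api.php"

def revision_query_url(revid: int) -> str:
    """Build a MediaWiki API query URL for the wikitext of one revision."""
    params = {
        "action": "query",
        "prop": "revisions",
        "revids": str(revid),
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
    }
    return API_URL + "?" + urllib.parse.urlencode(params)

def fetch_revision_text(revid: int) -> str:
    """Fetch the wikitext of a revision (requires network access)."""
    with urllib.request.urlopen(revision_query_url(revid)) as resp:
        data = json.load(resp)
    # With format=json (formatversion 1), content is keyed under "*"
    page = next(iter(data["query"]["pages"].values()))
    return page["revisions"][0]["slots"]["main"]["*"]
```

The real script additionally batches requests and writes gzipped newline-delimited JSON.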
Add features to the training data.
$ python -m wikidit.scripts.add_features \
enwiki.labeling_revisions.nettrom_30k.json \
enwiki-labeling_revisions-w_features
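The actual feature set is defined in wikidit.scripts.add_features. Purely as an illustration, features of the kind commonly used by article-quality models (length, headings, references, links) can be computed from raw wikitext like this; the function and feature names are hypothetical:

```python
import re

def wikitext_features(text: str) -> dict:
    """Toy feature extractor for raw wikitext (illustrative only)."""
    return {
        "chars": len(text),
        "words": len(text.split()),
        # Section headings such as "== History =="
        "headings": len(re.findall(r"^=+[^=].*?=+\s*$", text, flags=re.MULTILINE)),
        # Citations delimited by <ref> tags
        "refs": len(re.findall(r"<ref[\s>]", text)),
        # Internal wikilinks such as [[Article title]]
        "wikilinks": len(re.findall(r"\[\[", text)),
    }

sample = "== History ==\nSome text with a [[link]] and a citation.<ref>src</ref>"
features = wikitext_features(sample)
```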
The predictive model used in the app is defined in the notebook notebooks/quality_predictions.ipynb. Executing the notebook updates the pickled model at wikidit/xgboost-sequential.pkl:
$ jupyter nbconvert --execute --to notebook --inplace notebooks/quality_predictions.ipynb
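At startup the app needs to load that pickle. The loading pattern is standard `pickle` usage; the helper below is a sketch (the loader name and the error message are assumptions, and the type of the unpickled model depends on the notebook):

```python
import pickle
from pathlib import Path

# Path to the pickled model, as given in this README
MODEL_PATH = Path("wikidit/xgboost-sequential.pkl")

def load_model(path: Path = MODEL_PATH):
    """Load the pickled model; fail clearly if the notebook has not been run."""
    if not path.exists():
        raise FileNotFoundError(
            f"{path} not found; execute notebooks/quality_predictions.ipynb first"
        )
    with path.open("rb") as f:
        return pickle.load(f)
```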
The file enwiki.labeling_revisions.nettrom_30k.json is a sample of over 30,000 revisions, balanced equally across the Stub, Start, C, B, and A quality classes. The same sample is used to train MediaWiki's quality-prediction model in the articlequality package.