AdaLabel

Code/data for ACL'21 paper "Diversifying Dialog Generation via Adaptive Label Smoothing".

We implemented an Adaptive Label Smoothing (AdaLabel) approach that can adaptively estimate a target label distribution at each time step for different contexts. Our method is an extension of the traditional MLE loss. The current implementation is designed for the task of dialogue generation. However, our approach can be readily extended to other text generation tasks such as summarization. Please refer to our paper for more details.
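
To make the idea concrete, below is a minimal PyTorch sketch of an adaptive label smoothing loss. It is illustrative only: here the gold-token weight eps is tied to the model's own prediction confidence and the leftover probability mass is spread uniformly over non-gold words, whereas the paper computes both adaptively with an auxiliary decoder and a target-word predictor. The clamp bounds and padding index are hypothetical.

import torch
import torch.nn.functional as F

def adaptive_label_smoothing_loss(logits, target, padding_idx=0):
    # logits: (num_tokens, vocab_size); target: (num_tokens,)
    vocab_size = logits.size(-1)
    with torch.no_grad():
        probs = F.softmax(logits, dim=-1)
        # Per-step weight for the gold token, tied here to the model's
        # current confidence (the paper derives it from an auxiliary
        # bi-directional decoder instead). Bounds are hypothetical.
        eps = probs.max(dim=-1).values.clamp(0.5, 0.99)
        # Spread the remaining mass uniformly over non-gold words (the
        # paper learns this part with a target-word predictor).
        smooth = (1.0 - eps) / (vocab_size - 1)
        one_hot = F.one_hot(target, vocab_size).float()
        soft_target = one_hot * eps.unsqueeze(-1) + (1.0 - one_hot) * smooth.unsqueeze(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    loss = -(soft_target * log_probs).sum(dim=-1)
    mask = target.ne(padding_idx).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)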

Our implementation is based on the OpenNMT-py project, so most behaviors of our code follow the default settings in OpenNMT-py. Specifically, we forked from this commit of OpenNMT-py and implemented our code on top of it. This repo preserves all previous commits of OpenNMT-py and ignores all follow-up commits. Our changes can be viewed by comparing the commits.

Our code is tested on Ubuntu 16.04 using python 3.7.4 and PyTorch 1.7.1.

How to use

Step 1: Setup

Install dependencies:

conda create -n adalabel python=3.7.4
conda activate adalabel
conda install pytorch==1.7.1 cudatoolkit=10.1 -c pytorch -n adalabel
pip install -r requirement.txt
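
After installing, you can sanity-check the environment with a short Python snippet:

import torch

print(torch.__version__)          # should print 1.7.1
print(torch.cuda.is_available())  # should print True if CUDA 10.1 is set up correctly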

Make folders to store training and testing files:

mkdir checkpoint  # Model checkpoints will be saved here
mkdir log_dir     # The training log will be placed here
mkdir result      # The inferred results will be saved here

Step 2: Preprocess the data

The data can be downloaded from this link. After downloading and unzipping, the DailyDialog and OpenSubtitle datasets used in our paper can be found in the data_daily and data_ost folders, respectively. We provide a script scripts/preprocess.sh to preprocess the data.

bash scripts/preprocess.sh

Note:

  • Before running scripts/preprocess.sh, remember to modify its first line (i.e., the value of DATA_DIR) to specify the correct data folder.
  • The default choice of our tokenizer is bert-base-uncased (see the example after this list).
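
For reference, bert-base-uncased applies lowercasing and WordPiece tokenization. The snippet below shows its effect, assuming the HuggingFace transformers package (which may differ from the exact tokenizer wrapper used in this repo):

from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained('bert-base-uncased')
print(tok.tokenize("How are you doing today?"))
# ['how', 'are', 'you', 'doing', 'today', '?']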

Step 3: Train the model

Our model can be trained using one of the following scripts:

bash scripts/train_daily.sh   # Train models on the DailyDialog dataset

or

bash scripts/train_ost.sh     # Train models on the OpenSubtitle dataset

Note:

  • The resulting checkpoints will be written to the checkpoint folder.
  • By default, our script uses the first available GPU; see the example after this list for selecting a different one.
  • Once training is complete, the training script reports the best-performing checkpoint on the validation set.
  • Experiments in our paper were performed on a TITAN Xp GPU with 12GB of memory.
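
To train on a specific GPU instead, restricting device visibility via CUDA_VISIBLE_DEVICES should work, since PyTorch only sees the devices exposed to it:

CUDA_VISIBLE_DEVICES=1 bash scripts/train_daily.sh   # e.g., use the second GPU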

Step 4: Inference

Inference can be performed using one of the following scripts:

bash scripts/inference_daily.sh {which GPU to use} {path to your model checkpoint}   # Infer models on the DailyDialog dataset

or

bash scripts/inference_ost.sh {which GPU to use} {path to your model checkpoint}     # Infer models on the OpenSubtitle dataset

Note:

  • Inferred outputs will be saved to the result folder.
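
For example, to run inference on GPU 0 (the checkpoint file name below is hypothetical; use one produced in Step 3):

bash scripts/inference_daily.sh 0 checkpoint/daily_step_10000.pt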

Step 5: Evaluation

The following script can be used to evaluate our model based on the inferred outputs obtained in Step 4:

python scripts/eval.py {path to the data folder} {path to the inferred output file}
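
The paper reports diversity metrics such as Distinct-1/2 alongside other measures. As a reference point, here is a minimal sketch of the distinct-n computation (not necessarily identical to what scripts/eval.py implements):

def distinct_n(sequences, n):
    """Ratio of unique n-grams to total n-grams over tokenized responses."""
    ngrams, total = set(), 0
    for toks in sequences:
        for i in range(len(toks) - n + 1):
            ngrams.add(tuple(toks[i:i + n]))
            total += 1
    return len(ngrams) / max(total, 1)

# Example: distinct_n([["i", "am", "fine"], ["i", "am", "here"]], 2) -> 0.75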

Citation

Please cite our paper if you find this repo useful :)

@inproceedings{wang2021adalabel,
  title={Diversifying Dialog Generation via Adaptive Label Smoothing},
  author={Wang, Yida and Zheng, Yinhe and Jiang, Yong and Huang, Minlie},
  booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
  year={2021}
}

Issues and pull requests are welcome.
