
Coder Reviewer Reranking for Code Generation


Official code release for the paper Coder Reviewer Reranking for Code Generation.

Setup

Downloading data and cached outputs

  1. For convenience, we include the data used in this project in dataset.zip. Download and unzip this file before using this repo. It includes:
  • HumanEval, along with the prompt used in the CodeT paper.
  • MBPP, in both its sanitized and original versions.
  • Spider, including the evaluation script, the data, and cached outputs from executing the ground-truth SQL queries.
  • NL2BASH.
  2. Samples and precomputed execution results can be found in samples.zip.
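The download-and-unzip steps above can also be scripted; here is a minimal sketch using Python's standard zipfile module. The archive names come from this repo, but the assumption that both files sit in the repository root is ours:

```python
import zipfile

def extract_archives(archives, dest="."):
    """Unzip each archive into dest.

    For this repo: extract_archives(["dataset.zip", "samples.zip"])
    after downloading both files into the repository root.
    """
    for archive in archives:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
```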

Installing software environment

  1. All experiments are run with python==3.8.13.
  2. Install pyminifier from source. Installing pyminifier requires reverting setuptools to an older version (pip install setuptools==57.5.0). For other pyminifier installation issues, check out their issues page for potential fixes.
  3. Install torch==1.12.1. You should install a distribution that matches your hardware environment.
  4. Install the remaining packages with:
pip install -r requirements.txt

Usage

Running the selector with released outputs

  1. We release samples obtained from the OpenAI Codex API in samples.zip. After unzipping this file, you should see a folder with the structure below:
samples
├── codex-cushman
│   ├── codet_humaneval
│   └── mbpp_sanitized
├── codex001
└── codex002

We will go over the code/commands needed to collect these samples in a later section.

  2. Run the following script to compare different reranking methods:

model="codex002"
dataset="mbpp_sanitized"
outdir="result_db"
python sample_selectors.py --model ${model} \
    --num_samples_end 25 \
    --num_samples_gap 5 \
    --data_path samples \
    --out_dir ${outdir} \
    --dataset ${dataset} \
    --num_procs 10 \
    --num_bootstraps 50 \
    --temperature 0.4 \
    --verbose
  3. We have included the execution results of all generated samples in samples.zip. If you want to execute the generated programs yourself, run the following command. We use aggressive multiprocessing to speed this up; you can change the number of processes by modifying nprocs. Modify the model and dataset arguments to execute other models and datasets.
model="codex002"
dataset="codet_humaneval"
nprocs=25
torchrun --nproc_per_node=${nprocs} multi_exec.py --temperature 0.4 --world_size 25 --dataset ${dataset} --in_data_path samples/${model} --batch_size 4 --num_seeds 25 --num_samples 5 --num_prompts 0
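The executability-* baselines in the results table rely on whether a candidate program runs without crashing. A minimal in-process sketch of that check follows; the repo executes programs in worker processes (multi_exec.py above), so this single-process version with no timeout is an illustration only:

```python
def is_executable(src):
    # A candidate passes the filter if it runs without raising.
    # Real usage needs process isolation and timeouts, which the
    # repo handles with multiprocessing workers; omitted here.
    try:
        exec(src, {})
        return True
    except Exception:
        return False

candidates = [
    "def f(x):\n    return x + 1",   # valid
    "def f(x)\n    return x + 1",    # syntax error: missing colon
]
executable = [c for c in candidates if is_executable(c)]
```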

The output will look like the following, and a dictionary object containing the results will be saved into result_db:

sum_logprob 0.5587 0.01
avg_logprob 0.5832 0.01
avg_reverse_logprob 0.5626 0.01
random 0.5562 0.01
sumreverselogprob-ensemble#0.5 0.6152 0.01
avgreverselogprob-ensemble#0.5 0.5963 0.01
executability-sum_logprob 0.5976 0.01
executability-avg_logprob 0.6049 0.01
executability-avg_reverse_logprob 0.5952 0.01
executability-random 0.5881 0.01
executability-sumreverselogprob-ensemble#0.5 0.6440 0.01
executability-avgreverselogprob-ensemble#0.5 0.6159 0.01
mbr_exec 0.6389 0.01
oracle 0.7891 0.01
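To make the method names in the table concrete, here is a hedged sketch of the scoring rules as we read them: sum_logprob and avg_logprob score a candidate by its (optionally length-normalized) log-probability under the Coder model, the reverse variants use the Reviewer model's log p(instruction | program), and the "#0.5" suffix on the ensemble names suggests an equal-weight combination of the two. The field names and numbers below are illustrative, not taken from the repo:

```python
def sum_logprob(token_logprobs):
    # Total sequence log-probability under the Coder model.
    return sum(token_logprobs)

def avg_logprob(token_logprobs):
    # Length-normalized log-probability.
    return sum(token_logprobs) / len(token_logprobs)

def ensemble(coder_lp, reviewer_lp, alpha=0.5):
    # Weighted Coder/Reviewer combination; alpha = 0.5 is an
    # assumption based on the "#0.5" suffix in the method names.
    return alpha * coder_lp + (1 - alpha) * reviewer_lp

# Illustrative candidates: Coder token logprobs + Reviewer sum logprob.
candidates = [
    {"coder": [-0.2, -0.4, -0.1], "reviewer": -9.0},
    {"coder": [-0.5, -0.6],       "reviewer": -2.0},
]
best = max(
    candidates,
    key=lambda c: ensemble(sum_logprob(c["coder"]), c["reviewer"]),
)
```

Note that the first candidate wins under sum_logprob alone, while the Reviewer term flips the ranking in the ensemble, which mirrors why the ensembled methods outperform the single-model baselines in the table.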

Collecting Samples

  1. The example command below collects 125 (5×25) samples for zero-shot HumanEval with codex002. Explore the collect*.py scripts for collecting samples on other datasets. These scripts collect programs given the language instructions, i.e., they implement the Coder model.
python collect_zeroshot.py --num_samples 5 --num_seeds 25 --dataset codet_humaneval collect --output-path samples/codex002 --engine-name codex002 --temperature 0.4 --split test --n-procs 1 --batch-size 20 --mode sample --n-prompts 0
  2. We compute the reviewer model score p(instruction | generated program) with fewshot_reviewer.py and zeroshot_reviewer.py. Here's an example command for HumanEval with codex002:
python zeroshot_reviewer.py --num_procs 1 --batch_size 20 --temperature 0.4 --num_samples 5 --split test --dataset codet_humaneval --model codex002 --data_path samples/codex002 --canonicalize --clean-print

This code will update the cached results with the reviewer model probability. Explore the other arguments to run different models and datasets.
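Conceptually, the Reviewer score is the likelihood of the original instruction read as a continuation of the generated program. A sketch under the assumption that some LM API returns per-token log-probabilities of a continuation given a prefix; score_continuation is a hypothetical helper standing in for that API call, not part of this repo:

```python
def reverse_logprob(program, instruction, score_continuation):
    # p(instruction | program): prompt the LM with the generated
    # program and sum the log-probabilities of the instruction
    # tokens. `score_continuation(prefix, continuation)` is a
    # hypothetical stand-in for the LM scoring call.
    return sum(score_continuation(program, instruction))

# Toy scorer for illustration: pretends every whitespace-delimited
# token costs -0.5 nats, regardless of the prefix.
def toy_scorer(prefix, continuation):
    return [-0.5] * len(continuation.split())

score = reverse_logprob("def add(a, b): return a + b",
                        "Write a function that adds two numbers",
                        toy_scorer)
```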

Authors

Acknowledgement

This codebase is largely adapted from MBR-Exec.

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

CC BY-NC 4.0

Citation

If you find our work helpful, please cite as

@article{Zhang2022Coder,
  title={Coder Reviewer Reranking for Code Generation},
  author={Tianyi Zhang and Tao Yu and Tatsunori B. Hashimoto and Mike Lewis and Wen-tau Yih and Daniel Fried and Sida I. Wang},
  journal={ArXiv},
  year={2022},
  volume={abs/}
}

