
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models

Code License Model Weight License Python 3.9+

🤗 HF Repo • 📃 [MetaMath]

MetaMath

News

  • 🔥 Our MetaMath-Llemma-7B model achieves 30.0 pass@1 on the MATH benchmark, surpassing all SOTA open-source LLMs at the 7B-13B scale! All training scripts and the model are released.
  • 🔥 Our MetaMath-Mistral-7B model achieves 77.7 pass@1 on the GSM8K benchmark, surpassing all SOTA open-source LLMs! All training scripts and the model are released.
  • 🔥 The full MetaMathQA dataset is now released on Hugging Face: MetaMathQA!
  • 🔥 The GSM8K_Backward dataset is also released on Hugging Face (GSM8K_Backward) to evaluate reversal mathematical reasoning ability!
  • 🔥 Although the data augmentation for MetaMathQA is sourced from ChatGPT 3.5, our MetaMath-70B model outperforms the closed-source ChatGPT 3.5 on GSM8K!
  • 🔥 Our MetaMath-7B model achieves 66.5 pass@1 on the GSM8K benchmark, 11.6 points higher than the previous SOTA open-source LLM!
  • 🔥 Our MetaMath-7B model achieves 19.8 pass@1 on the MATH benchmark, 9.1 points higher than the previous SOTA open-source LLM!
| Model | Checkpoint | Paper | GSM8K | MATH | License |
|---|---|---|---|---|---|
| MetaMath-70B-V1.0 | 🤗 HF Link | 📃 [MetaMath] | 82.3 | 26.6 | Llama 2 |
| MetaMath-13B-V1.0 | 🤗 HF Link | 📃 [MetaMath] | 72.3 | 22.4 | Llama 2 |
| MetaMath-7B-V1.0 | 🤗 HF Link | 📃 [MetaMath] | 66.5 | 19.8 | Llama 2 |
| MetaMath-Mistral-7B | 🤗 HF Link | 📃 [MetaMath] | 77.7 | 28.2 | Apache License 2.0 |
| MetaMath-Llemma-7B | 🤗 HF Link | 📃 [MetaMath] | 69.2 | 30.0 | Apache License 2.0 |

Comparison of MetaMath with other LLMs.

🔥 Comprehensive Results

| Model | GSM8K Pass@1 | MATH Pass@1 |
|---|---|---|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| 🔥 MetaMath-7B | 66.5 | 19.8 |
| 🔥 MetaMath-13B | 72.3 | 22.4 |
| 🔥 MetaMath-Mistral-7B | 77.7 | 28.2 |
| 🔥 MetaMath-Llemma-7B | 69.2 | 30.0 |
| WizardMath-70B | 81.6 | 22.7 |
| 🔥 MetaMath-70B | 82.3 | 26.6 |

Quick Start

Clone MetaMath and install the required packages:

git clone https://github.com/meta-math/MetaMath.git
cd MetaMath
pip install -r requirements.txt

If you encounter a Ray installation problem, please run:

pip install --upgrade ray
pip install --upgrade pyarrow
pip install pandas

Dataset Usage

Run the following command to load the data:

from datasets import load_dataset
dataset = load_dataset("meta-math/MetaMathQA")
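
To sanity-check the download, you can print a sample. This is a minimal sketch, assuming the dataset exposes a single "train" split; the exact field names (e.g. "query"/"response") are an assumption, not something this README guarantees:

from datasets import load_dataset

dataset = load_dataset("meta-math/MetaMathQA")
print(dataset)                # lists the available splits and row counts
sample = dataset["train"][0]
print(sample.keys())          # question/answer-style fields, e.g. "query" and "response" (assumption)
print(sample)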

Training

You need to prepare the Llama-2 base model and our MetaMathQA dataset (huggingface MetaMathQA), then run:

bash run.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch --master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} --nproc_per_node=8 --use_env train_math.py \
    --model_name_or_path "meta-llama/Llama-2-7b-hf" \
    --data_path "path/to/metamathqa" \
    --data_length 10000000 \
    --bf16 True \
    --output_dir "path/to/save" \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 2 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True

Supervised fine-tuning

We supervised fine-tune MetaMath-7B with the following hyperparameters (the effective batch size of 128 corresponds to 8 GPUs × a per-device batch size of 4 × 4 gradient accumulation steps in the command above):

| Hyperparameter | LLaMA 2 7B |
|---|---|
| Batch size | 128 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 512 |
| LR scheduler | cosine |

Evaluation

We use vLLM for fast generation:

python eval_gsm8k.py --model "path/to/save" --data_file ./data/test/GSM8K_test.jsonl
python eval_math.py --model "path/to/save" --data_file ./data/test/MATH_test.jsonl

where the "path/to/save" should be replaced by the finetuned model, you can also download our series of MetaMath models in huggingface:
🤗 MetaMath 7B 🤗 MetaMath 13B 🤗 MetaMath 70B

The inference prompt for our MetaMath is:

"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."

Thanks to WizardMath and RFT for their open-source code; parts of our code are based on them.

Citation

Please cite the paper if you refer to the MetaMath model, code, data, or paper.
@article{yu2023metamath,
  title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
  author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
  journal={arXiv preprint arXiv:2309.12284},
  year={2023}
}


metamath's Issues

path/to/llama-2

What does --model_name_or_path "path/to/llama-2" mean?
To run train_math.py, which model should I download to that path?

RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 when setting max_new_tokens

I found this bug with the following reproduction.

import torch
import sys
import random
import numpy as np
from transformers import LlamaTokenizer, LlamaForCausalLM, BitsAndBytesConfig, GenerationConfig
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    # bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True
device = "cuda:0"
tokenizer = LlamaTokenizer.from_pretrained("MetaMath-7B-V1.0",legacy=False)
model = LlamaForCausalLM.from_pretrained(
        "MetaMath-7B-V1.0",
        quantization_config=bnb_config,
        device_map="auto",
    )
model.config.pad_token_id = tokenizer.pad_token_id = 0  # unk
model.config.bos_token_id = 1
model.config.eos_token_id = 2
generation_config = GenerationConfig(
                temperature=0.8,
                max_new_tokens=512,###here is the problem
                do_sample=True,
                top_p=0.95,
                early_stopping=True,
            )
model.config.pad_token_id = tokenizer.pad_token_id = 0  # unk
model.config.bos_token_id = 1
model.config.eos_token_id = 2
eos_token_id = -100
input = "Her eyes are beautiful."
tokens = tokenizer([input]*10, return_tensors='pt', padding=True).to(device)
with torch.inference_mode():
    output = model.generate(**tokens, generation_config=generation_config, return_dict_in_generate=True)
decoded = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
print(decoded)

When setting max_new_tokens, I get the tensor error; commenting it out works fine. Could you please check that? My transformers version is 4.33.3.

Dataset

The preprint states that you "release the MetaMathQA dataset". However, the Hugging Face dataset is empty, and the data is not in this repository either.

will there be ablation studies?

Thanks for providing the prompts and datasets so anyone can reproduce your experiments. However, have you considered doing an ablation study to analyze the effect of the different augmentation tasks and to find out why they improve performance?
SkyMath mentioned your work but didn't provide any details (https://github.com/SkyworkAI/Skywork). Their scores suggest there may be more efficient methods to produce higher-quality datasets.

Dataset generation script

Could you please publish the dataset generation script? This will ensure reproducibility and make a good contribution to the open-source LLM community.

OOM

I'm wondering why running the training script constantly gives me an OOM error.
I'm following the exact sh file format, and
I'm using 4 x A100 80GB, so I believe there should be no problem. Do you have any idea why?

MetaMath-Mistral-7B gsm8k/math acc different from the reported values

I tried run_mistral.sh and got:
gsm8k acc==== 0.7376800606520091
MATH acc==== 0.2726

I also tried

export HF_SAVE_PATH="meta-math/MetaMath-Mistral-7B" && \
python eval_gsm8k.py --model $HF_SAVE_PATH --data_file ./data/test/GSM8K_test.jsonl && \
python eval_math.py --model $HF_SAVE_PATH --data_file ./data/test/MATH_test.jsonl

and get:
gsm8k acc==== 0.7710386656557998
MATH acc==== 0.278

which is also a bit different from the reported 77.7 and 28.2.

I would like to know whether this is normal and what might be the cause. Thanks!

Questions about MetaMATH dataset

Thank you for your excellent work.

  1. Hello, I would like to know whether the MATH-related and GSM8K-related data in MetaMathQA need to be separated during training, i.e. the MATH-related data is used for the MATH test-set evaluation after training, and the GSM8K-related data for the GSM8K evaluation?
  2. Furthermore, what is the format of the data input to your code? Can you give an example?

Problem regarding the evaluation codes of the GSM8K - `eval_gsm8k.py`

In line 87, the prompts are batched. However, in line 95, the answer labels are not batched. If you print

for idx, (prompt, prompt_answer) in enumerate(zip(batch_gsm8k_ins, gsm8k_answers)):
    print(prompt, prompt_answer)
    if isinstance(prompt, list):
        pass
    else:
        prompt = [prompt]
    XXXXXXX

you will find that the prompts do not correspond to the answers.
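
If the root cause is indeed that only the prompts are batched, one hypothetical fix (a sketch under that assumption, not the repo's code) is to chunk the prompts and the gold answers with the same helper so that each prompt stays paired with its own answer:

# Sketch only: the dummy lists stand in for the real gsm8k_ins / gsm8k_answers.
gsm8k_ins = [f"question {i}" for i in range(5)]   # the prompts
gsm8k_answers = [str(i) for i in range(5)]        # the gold answers
batch_size = 2                                    # assumption: whatever batch size eval_gsm8k.py uses

def chunks(items, size):
    for start in range(0, len(items), size):
        yield items[start:start + size]

for prompt_batch, answer_batch in zip(chunks(gsm8k_ins, batch_size),
                                      chunks(gsm8k_answers, batch_size)):
    for prompt, gold in zip(prompt_batch, answer_batch):
        print(prompt, "->", gold)                 # each prompt is paired with its own gold answer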

Potential error in eval_gsm8k.py

Dear authors, thank you for the amazing work and sharing your code and data!

I wanted to ask about your evaluation code: currently, if the model outputs an answer with a decimal point, it is automatically rounded to the nearest integer.

In this way, a wrong answer (e.g. 8.5) could be counted as correct (as 9) despite a calculation error, which indeed occurs often in some model generations.

In this light, I believe stricter evaluation code may be needed.
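
For illustration, a stricter check could compare the predicted and gold answers numerically instead of rounding; this is a hypothetical sketch, not the repo's evaluation code:

# Hypothetical stricter comparison: parse both answers as numbers and require
# a near-exact match; fall back to string equality if parsing fails.
def is_correct(pred: str, gold: str, tol: float = 1e-6) -> bool:
    try:
        return abs(float(pred) - float(gold)) <= tol
    except ValueError:
        return pred.strip() == gold.strip()

assert is_correct("9", "9.0")
assert not is_correct("8.5", "9")  # a rounding-based check would wrongly accept this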

License

Hi!

Thanks a lot for releasing data and code!

Could you add a license for both so that this can be used by industry labs.

eval_math and eval_gsm8k

Hi! I am wondering how you control the LLM's output if you don't explicitly tell it to output the answer in the format "The answer is: " (as expected by the function process_results()). I didn't see such a prompt in your provided code. I ran your code without any modifications, and the LLM does not output the answer with "The answer is", making the result impossible to judge. Thank you!

BTW, could you please also provide few-shot examples for eval_math and eval_gsm8k, if they exist? Thanks!
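
For context, the evaluator can only score completions that contain the "The answer is:" marker. A hypothetical extraction helper (not the repo's process_results) might look like this:

import re

# Sketch: take whatever follows the last "The answer is:" marker, if any.
def extract_answer(completion: str):
    matches = re.findall(r"The answer is:\s*(.+)", completion)
    return matches[-1].strip() if matches else None  # None => cannot be judged

print(extract_answer("... so 48 + 24 = 72. The answer is: 72"))  # -> 72
print(extract_answer("... so the total is 72."))                 # -> None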

The few-shot in-context learning issue

I just want to test the model's few-shot in-context learning capability, but I found an issue. I added instruction/response few-shot examples before the question, and the result generated by llm.generate remains the same. No matter how many examples I add, the inference results remain the same as the zero-shot result. Could you help me with this issue?

eval_math script outputs 0 accuracy

When I run

python eval_math.py --model meta-math/MetaMath-7B-V1.0 --data_file data/test/MATH_test.jsonl --tensor_parallel_size 1

from the base directory of this repository, the final output is

start=== 0 , end==== 9223372036854775807
length==== 5000 , acc==== 0.0

I ran inference on a 1x A100 40GB. I am using vllm v0.1.y, transformers 4.33.2, and torch 2.0.1.

Issue with Fine-tuning Mistral 7B Model - Results Discrepancy

Hello,
I attempted to replicate the experiment, using the MetaMathQA dataset to fine-tune Mistral-7B, but the results I obtained do not match the ones shared in the repository.

Reproduction steps

I used the following parameters in run_mistral.sh.

export MODEL_PATH='mistralai/Mistral-7B-v0.1'
export SAVE_PATH='0224_mistral-7b-metamath395'
export MASTER_ADDR="localhost"
export MASTER_PORT="1231"
export GLOO_SOCKET_IFNAME="lo"
export NCCL_SOCKET_IFNAME="lo"
export WANDB_DISABLED=true
export HF_TOKEN="token of your huggingface"
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch --master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} --nproc_per_node=8 --use_env train_math.py \
    --model_name_or_path $MODEL_PATH \
    --data_path MetaMathQA-395K.json \
    --data_length 10000000 \
    --bf16 True \
    --output_dir $SAVE_PATH \
    --num_train_epochs 3 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 100000 \
    --save_total_limit 0 \
    --learning_rate 5e-6 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
    --tf32 True
python eval_gsm8k.py --model $SAVE_PATH --data_file ./data/test/GSM8K_test.jsonl
python eval_math.py --model $SAVE_PATH --data_file ./data/test/MATH_test.jsonl

and I get
gsm8k acc==== 0.6618650492797574
math acc==== 0.2274

which is different from the reported 77.7 and 28.2

Environment details

Here are the details of my Python environment:

transformers==4.34.0
wandb==0.15.3
torch==2.0.1
sentencepiece==0.1.99
tokenizers==0.14
accelerate==0.21.0
bitsandbytes==0.40.0

I would appreciate any guidance or suggestions you could provide to help resolve this discrepancy. Thank you in advance for your time and assistance.

Best regards,
lyf-00

Transformers and Tokenizers version conflict.

I try:
pip install -r requirements.txt
and get the following error:
Cannot install -r requirements.txt (line 1) and tokenizers==0.13.3 because these package versions have conflicting dependencies.
How do I fix this error?

How many tokens did MetaMath train on?

Did you use the full 4K context length of llama for training for each sample?

I see you have 395K examples and used Llama-2 with a 4K context, so an upper bound is 4K × 395K. Is it possible to get a more precise number for the tokens trained on?
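
A rough back-of-the-envelope bound only (an assumption-laden sketch, not an official number): the SFT table above lists a max length of 512 and 3 epochs, while the question assumes the full 4K Llama-2 context:

# Upper bounds only; actual counts depend on real sample lengths and packing.
samples = 395_000
epochs = 3
print(samples * 512 * epochs)    # max length 512 (SFT table): ~0.6B tokens
print(samples * 4096 * epochs)   # full 4K context:            ~4.9B tokens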

The accuracy on the MATH test set with the 7B model is not consistent with the paper

Hello, using the MetaMathQA dataset and code, I reproduced the experiments on the 7B base model. However, the accuracy I get is 17.14%, which is not consistent with your reported 19.8%. Is there something wrong with the results in the paper, or with my experiments? Can you help me? Thank you.

start=== 0 , end==== 9223372036854775807
length==== 5000 , acc==== 0.1714
