
MMLU-Pro

| 🤗 Dataset | 🏆 Leaderboard | 📖 Paper |

This repo contains the evaluation code for the paper "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark".

Introduction

We introduce MMLU-Pro, an enhanced benchmark designed to evaluate language understanding models across broader and more challenging tasks. Building on the Massive Multitask Language Understanding (MMLU) dataset, MMLU-Pro integrates more challenging, reasoning-focused questions and increases the answer choices per question from four to ten, significantly raising the difficulty and reducing the chance of success through random guessing. MMLU-Pro comprises over 12,000 rigorously curated questions from academic exams and textbooks, spanning 14 diverse domains including Biology, Business, Chemistry, Computer Science, Economics, Engineering, Health, History, Law, Math, Philosophy, Physics, Psychology, and Others.

Our experimental results show that MMLU-Pro not only raises the challenge, causing a significant drop in accuracy of 16% to 33% compared to MMLU, but also demonstrates greater stability under varying prompts. With 24 different prompt styles tested, the sensitivity of model scores to prompt variations decreased from 4-5% on MMLU to just 2% on MMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT) reasoning achieved better performance on MMLU-Pro than with direct answering, in stark contrast to the findings on the original MMLU, indicating that MMLU-Pro includes more complex reasoning questions.


Dataset Creation

MMLU-Pro was created to provide language models with a more challenging and robust benchmark, pushing the boundaries of what these models can achieve in terms of expert-level knowledge and reasoning. Please refer to our Hugging Face 🤗 Dataset for more details.

Evaluation

To run local inference, modify the model name in the following script and execute it:

cd scripts/examples/
sh eval_llama_2_7b.sh

To run inference through an API, set your API key in the evaluate_from_api.py script and execute the bash script:

cd scripts/examples/
sh eval_gpt_4.sh
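
For a rough picture of what an API call in this setting looks like, here is a minimal sketch. The prompt text mirrors one of the system prompts quoted in the issues below, while the model name, message layout, and sampling settings are assumptions for illustration rather than the exact contents of evaluate_from_api.py:

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

system_prompt = (
    "The following are multiple choice questions (with answers) about {subject}. "
    'Think step by step and then output the answer in the format of "The answer is (X)" at the end.'
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt.format(subject="biology")},
        {"role": "user", "content": "<question text and the ten answer options go here>"},
    ],
    temperature=0.1,
)
print(response.choices[0].message.content)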

πŸ† Mini-Leaderboard

| Model | Overall Accuracy |
| --- | --- |
| Claude-3.5-Sonnet | 76.12 |
| GPT-4o | 72.55 |
| Gemini-1.5-Pro | 69.03 |
| Claude-3-Opus | 68.45 |
| GPT-4-Turbo | 63.71 |
| Gemini-1.5-Flash | 59.12 |
| Yi-large | 57.53 |
| Claude-3-Sonnet | 56.80 |
| Llama-3-70B-Instruct | 56.20 |
| Phi3-medium-4k | 55.70 |
| Deepseek-V2-Chat | 54.81 |
| Phi-3-medium-4k-instruct | 53.48 |
| Llama-3-70B | 52.78 |
| Qwen1.5-72B-Chat | 52.64 |
| Yi-1.5-34B-Chat | 52.29 |
| Phi3-medium-128k | 51.91 |
| MAmmoTH2-8x7B-Plus | 50.40 |

For more details on various models and their accuracy across different subjects, please visit our Leaderboard.

Benchmarking Answer Extraction

We provide several alternatives for answer extraction. We found that the different answer extraction mechanisms have only a minor impact on the results.

python compute_accuracy.py results/llama-3-8b-quantized/CoT/all/
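
Conceptually, the script aggregates per-question correctness into an overall accuracy. Below is a minimal sketch of that computation; the directory layout and the pred / answer field names are assumptions for illustration, not the repo's exact result schema:

import json
from pathlib import Path

def accuracy(results_dir):
    # Walk every result file in the directory and tally exact-match correctness.
    correct = total = 0
    for path in Path(results_dir).glob("*.json"):
        for record in json.loads(path.read_text()):
            total += 1
            correct += record["pred"] == record["answer"]
    return correct / total if total else 0.0

print(accuracy("results/llama-3-8b-quantized/CoT/all/"))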

Thanks to @chibop1 for evaluating the robustness of MMLU-Pro across the different answer extraction strategies and temperatures. A detailed discussion is posted on Reddit.

Contact

Citation

BibTeX:

@misc{wang2024mmlupro,
      title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark}, 
      author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen},
      year={2024},
      eprint={2406.01574},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Contributors

chigkim, eadst, mxueguang, wenhuchen, wyyyb


mmlu-pro's Issues

Chat template for instruct models for local eval

Hi, thanks for open-sourcing the dataset. In evaluate_from_local.py, the few-shot prompt is created as a single string and tokenized directly. But the HF tokenizer has a chat_template, as demonstrated in the Llama3-70B HF README, where we can use a system->user->assistant style chat to build the few-shot prompt. Is there any reason why this is not used? Do you know the difference in the final metric with and without the chat template? Thanks.
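
For reference, a minimal sketch of building a few-shot prompt through the tokenizer's chat template, assuming the transformers library and a model that ships a chat_template; the model name and message contents are illustrative, not the repo's actual prompt construction:

from transformers import AutoTokenizer

# Any chat model with a chat_template works here; the Llama-3 name is just an example.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

messages = [
    {"role": "system", "content": "The following are multiple choice questions (with answers) about biology."},
    {"role": "user", "content": "<few-shot examples and the test question go here>"},
]

# Renders the conversation with the model's template and appends the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)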

Regex pattern in extract_final function.

I noticed that evaluate_from_local.py was updated with an extract_final function.

    # Matches the last capital letter in the A-J range (the lookahead forbids any A-J after it).
    pattern = r"[A-J](?=[^A-J]*$)"
    match = re.search(pattern, text)

Wouldn't this regex pattern take any last capital letter A-J in a response as the answer? For example, if a response says "..... A perfect answer cannot be found.", it would extract A as the answer, because that's the last capital letter between A and J in the response. Isn't it highly likely that every response has at least one capital letter between A and J somewhere?
When I tested a model with this regex as the last step of the extraction chain, it never triggered the random fallback x = random.randint(0, len(each["options"]) - 1).
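
A quick way to see the failure mode described above (the text string is just an illustration):

import re

pattern = r"[A-J](?=[^A-J]*$)"
text = "..... A perfect answer cannot be found."
match = re.search(pattern, text)
# Prints "A" even though the response never names an answer choice.
print(match.group(0) if match else None)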

Duplicates in test split

Hello, can you help me? There are 159 questions with duplicates in the test split. Here is the code to find the duplicates:

from collections import defaultdict
import datasets

test = datasets.load_dataset("TIGER-Lab/MMLU-Pro", split="test")

# Count how many times each (category, question, options, answer) combination appears.
mapping = defaultdict(int)

for item in test:
    mapping[(item["category"], item["question"], "".join(item["options"]), item["answer"])] += 1

# Print every entry that occurs more than once and count the distinct duplicated questions.
count_doubles = 0
for (category, question, *_), count in mapping.items():
    if count > 1:
        print(category, repr(question))
        count_doubles += 1
print(count_doubles)

Request for Llama3.1 8B, 70B and 405B

Hi!

The title is pretty much self-explanatory: vLLM has correctly supported these models since 0.5.3.post1, and the tokenizer has been fixed.

Could the new Llama3.1 models be added to the leaderboard? I could only provide results for the 8B; I don't have the resources for the unquantized 70B and 405B models.

There seems to be another PR pending for the Llama3.1 models, but it's related to tool calling, so I don't think it would affect the results (https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct/discussions/12).

Thanks!

Support for standard deviation

Hi!

I'm really liking this benchmark and using it for my tests. But I'm noticing that, even with the temperature set to 0.0, many inference engines are not fully deterministic.

Would it be possible to add a standard deviation to the output to further improve confidence in the results?
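
A minimal sketch of what such a report could look like, assuming accuracies from several repeated runs have been collected; the numbers below are hypothetical, not real benchmark results:

import statistics

# Hypothetical accuracies (%) from repeated evaluations of the same model.
run_accuracies = [52.4, 52.9, 52.1, 52.6, 52.3]

mean = statistics.mean(run_accuracies)
std = statistics.stdev(run_accuracies)  # sample standard deviation
print(f"accuracy: {mean:.2f} +/- {std:.2f} (n={len(run_accuracies)})")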

Possible to remove spam model result

Hi!

I can't find any information online about iAsk-Pro. I think this is a scam model. Is it possible for you to remove it from the results? And perhaps also figure out a way to prevent this in the future?

Best regards

Different Setup for Different Models?

Hi,

I realized that different scripts use different setups for different models. Wouldn't this lead to inconsistent test results?

Sampling parameters:

  • GPT-4o: temperature=0.1 and top_p=1.0
  • Gemini: temperature=0.0 and top_p=0.95
  • Claude-3: temperature=0.0 and top_p=1.0

System prompt:

  • GPT-4o with OpenAI: You are an knowledge expert, you are supposed to answer the multi-choice question to derive your final answer as The answer is ....
  • GPT-4 with AzureOpenAI: The following are multiple choice questions (with answers) about {subject}. Think step by step and then output the answer in the format of "The answer is (X)" at the end.
  • Gemini: Finish your answer with Final Answer: (X) where X is the correct letter choice. If none or more than one of the options match, choose the one that is the closest.
  • vllm: The following are multiple choice questions (with answers) about {subject}. Think step by step and then finish your answer with "the answer is (X)" where X is the correct letter choice.
  • Claude-3: No system prompt

Regex to extract answers:

  • GPT-4o: single extraction, r"answer is \(?([ABCDEFGHIJ])\)?"
  • gemini: double extractions, r"(Answer:|answer is)\s*\(?([ABCDEFGHIJ])\)?", r' (:|is)\s*\(?([ABCDEFGHIJ])\)?\b'
  • vllm: double extractions, r"answer is \(?([ABCDEFGHIJ])\)?", r'.*[aA]nswer:\s*([A-J])'
  • GPT-4 with AzureOpenAI: triple extractions, r"answer is \(?([A-J])\)?", r'.*[aA]nswer:\s*([A-J])', r"[A-J](?=[^A-J]*$)"

Are the scripts in this repo the same scripts used to produce the results in the MMLU-Pro paper?
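
For reference, a hedged sketch of the kind of chained extraction described in the lists above; the function name and the random fallback are illustrative, not the repo's exact implementation:

import random
import re

# Patterns tried in order, from strictest to loosest (mirroring the chains listed above).
PATTERNS = [
    r"answer is \(?([A-J])\)?",
    r".*[aA]nswer:\s*([A-J])",
    r"[A-J](?=[^A-J]*$)",
]

def extract_answer(text, options):
    for pattern in PATTERNS:
        match = re.search(pattern, text)
        if match:
            # The last pattern has no capture group, so fall back to the whole match.
            return match.group(1) if match.lastindex else match.group(0)
    # Nothing matched: guess uniformly at random among the available options.
    return "ABCDEFGHIJ"[random.randint(0, len(options) - 1)]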
