
Competitive Coding Benchmark

This benchmark aims to evaluate the performance of different LLMs against high-quality representative competitive coding problems sourced from cses.fi. These problems span a wide range of categories that require creative thinking and problem-solving ability.

Just want to see the results? Skip to Benchmark Results

The code is designed to make it easy to add new providers. Take a look at the providers folder for inspiration: Anthropic, Google, Huggingface (not tested), Mistral, and OpenAI are currently supported. Adding new judges should be just as easy; Python, C++, and Java (experimental) are currently supported, as they are the most commonly used languages in competitive programming.
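For orientation, adding a provider usually amounts to a small wrapper class around the vendor's chat API. The sketch below is purely illustrative: the class name, method name, and endpoint are hypothetical assumptions, not the repo's actual interface, so adapt it to whatever base class the providers folder defines.

# Hypothetical provider sketch; the real base class and method names in the
# providers folder may differ.
import requests


class ExampleProvider:
    """Minimal chat-completion wrapper around a placeholder HTTP API."""

    def __init__(self, api_key: str, model: str):
        self.api_key = api_key
        self.model = model

    def generate(self, prompt: str) -> str:
        # Send the benchmark prompt and return the raw model response text.
        response = requests.post(
            "https://api.example.com/v1/chat/completions",  # placeholder URL
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]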

WARNING: This repo is currently in a toy status. Use at your own risk.

How to run a benchmark

Benchmarks are defined in the root folder by creating a config.json with the following parameters:

{
    "ignore_time_limits": false, 
    "provider": "openai", 
    "api_key": "<API_KEY>", 
    "model": "gpt-4o",
    "base_prompt": "You are a world-class competitive programmer. Go step by step through the following problem. At the end, provide the entire solution in a markdown cpp block.",
    "categories": ["String Algorithms", "Sorting and Searching", "Advanced Techniques", "Additional Problems", "Dynamic Programming", "Graph Algorithms", "Introductory Problems", "Tree Algorithms", "Range Queries", "Mathematics", "Geometry"],
    "shots": 3,
    "language": "cpp",
    "continue_from_log": "benchmark/openai_gpt-4o_log.json" 
}

categories can be left empty ([]) to run the benchmark on all categories. The continue_from_log parameter is only needed when resuming a benchmark that was stopped prematurely; leave it empty when starting a new benchmark. The remaining fields should be self-explanatory. Benchmark results are written to the benchmark folder in the format <provider>_<model>_<language>_log.json.
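If you want to read the same config from your own scripts, a generic loader is only a few lines. The snippet below is a sketch that assumes config.json sits in the working directory; it is not the loader main.py actually uses.

# Generic config.json reader; main.py may handle this differently.
import json

ALL_CATEGORIES = [
    "String Algorithms", "Sorting and Searching", "Advanced Techniques",
    "Additional Problems", "Dynamic Programming", "Graph Algorithms",
    "Introductory Problems", "Tree Algorithms", "Range Queries",
    "Mathematics", "Geometry",
]

with open("config.json") as f:
    config = json.load(f)

# An empty categories list means "run all categories".
categories = config.get("categories") or ALL_CATEGORIES

# continue_from_log points at an existing log only when resuming a run.
resume_log = config.get("continue_from_log") or None
print(f"Running {len(categories)} categories, resuming from: {resume_log}")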

Once you have defined your config.json, you can start running the benchmark with:

python -m venv competitive-coding-bench
source competitive-coding-bench/bin/activate
pip install -r requirements.txt

python3 main.py

The output will look something like this for each problem. More ways to evaluate the data still need to be explored; below you will find some initial evaluations that give a quick picture of how some mainstream SOTA models perform.

 {
    "title": "Polygon Area",
    "category": "Geometry",
    "solution": "#include <iostream>\n#include <vector>\n#include <cmath>\n\nusing namespace std;\n\nint main() {\n    int n;\n    cin >> n;\n\n    vector<long long> x(n), y(n);\n    for (int i = 0; i < n; i++) {\n        cin >> x[i] >> y[i];\n    }\n\n    long long area = 0;\n    for (int i = 0; i < n; i++) {\n        int j = (i + 1) % n;\n        area += x[i] * y[j] - x[j] * y[i];\n    }\n\n    area = abs(area);\n\n    cout << area << endl;\n\n    return 0;\n}\n",
    "passed_test_cases": 5,
    "total_test_cases": 5,
    "mean_time_taken": 0.08634872436523437,
    "mean_memory_used": 0.0,
    "failure_reasons": [],
    "failed_test_cases": [],
    "exceeded_time_count": 0,
    "exceeded_memory_count": 0,
    "total_problems_passed": 72,
    "passed": true,
    "shots_info": {
        "shot": 1,
        "status": "passed"
    }
 }
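Because the log is plain JSON, it is easy to post-process. The following is a hypothetical analysis sketch that aggregates per-category pass rates from a log file; the field names match the sample entry above, but the assumption that the log is a top-level JSON list of such entries may not match the actual file layout.

# Hypothetical post-processing of a benchmark log; assumes the file is a JSON
# list of per-problem entries shaped like the sample above.
import json
from collections import defaultdict

with open("benchmark/openai_gpt-4o_log.json") as f:
    entries = json.load(f)

passed = defaultdict(int)
total = defaultdict(int)
for entry in entries:
    total[entry["category"]] += 1
    passed[entry["category"]] += int(entry["passed"])

for category in sorted(total):
    print(f"{category}: {passed[category]}/{total[category]} problems passed")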

Benchmark Results

Threshold is defined as the ratio of passed test cases to the total number of test cases for each problem. A problem is considered passed if this ratio meets or exceeds the specified threshold. For example, with a threshold of 0.5, a problem with 16 test cases is considered passed if at least 8 of them succeed.
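In code, the pass decision at a given threshold reduces to a single ratio comparison. The helper below is only an illustrative sketch of that rule, not code taken from the repo.

def problem_passed(passed_test_cases: int, total_test_cases: int,
                   threshold: float) -> bool:
    # A problem passes if its pass ratio meets or exceeds the threshold.
    return passed_test_cases / total_test_cases >= threshold


# With threshold 0.5 and 16 test cases, 8 passing cases (ratio 0.5) is enough.
assert problem_passed(8, 16, 0.5)
assert not problem_passed(7, 16, 0.5)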

(Charts: Overall Performance; Overall Performance After Thresholds; Performance Across Categories at Threshold 1, Shot 0; at Threshold 0.9, Shot 0; and at Threshold 0.5, Shot 0.)

Known Limitations

  • The data might be contaminated, since the evaluated models may have seen solutions to some of these problems during training.
  • The Additional Problems category should be split into subcategories, as it is very large.
  • The data deserves more thorough analysis.
  • Better visualizations are needed.

Citation

If you use this codebase, or otherwise find this work valuable, please cite Competitive-Coding-Benchmark:

@misc{competitive_coding_benchmark,
  title={Competitive-Coding-Benchmark},
  author={juvi21},
  journal={GitHub repository},
  year={2024},
  howpublished={\url{https://github.com/juvi21/Competitive-Coding-Benchmark/}}
}

Pull requests are encouraged : )
