
BenchmarkCI.jl


BenchmarkCI.jl provides an easy way to run a benchmark suite via GitHub Actions. It requires only minimal setup, provided a benchmark suite is declared using the BenchmarkTools.jl / PkgBenchmark.jl API.

Warning: This package is still experimental. Make sure to pin the version number of BenchmarkCI in your CI setup.

Setup

BenchmarkCI.jl requires PkgBenchmark.jl to work. See Defining a benchmark suite · PkgBenchmark.jl for more information. BenchmarkCI.jl also requires a Julia project at benchmark/Project.toml that is used for running the benchmarks.
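
For reference, a minimal benchmark/benchmarks.jl might look like the following sketch (the group name and benchmarked expression are illustrative; BenchmarkTools must be listed as a dependency in benchmark/Project.toml):

# benchmark/benchmarks.jl -- minimal sketch of the SUITE object that
# PkgBenchmark.jl expects this file to define.
using BenchmarkTools

const SUITE = BenchmarkGroup()
SUITE["sum"] = @benchmarkable sum($(rand(1000)))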

Create a workflow file (required)

Create (say) .github/workflows/benchmark.yml with the following configuration:

name: Run benchmarks

on:
  pull_request:

jobs:
  Benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1
      - uses: julia-actions/julia-buildpkg@latest
      - name: Install dependencies
        run: julia -e 'using Pkg; pkg"add PkgBenchmark BenchmarkCI@0.1"'
      - name: Run benchmarks
        run: julia -e 'using BenchmarkCI; BenchmarkCI.judge()'
      - name: Post results
        run: julia -e 'using BenchmarkCI; BenchmarkCI.postjudge()'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

If you don't want to benchmark your code on every push to every PR, you can conditionally trigger the job with a label:

name: Run benchmarks

on:
  pull_request:
    types: [labeled, opened, synchronize, reopened]

# Only trigger the benchmark job when the `run benchmark` label is added to the PR
jobs:
  Benchmark:
    runs-on: ubuntu-latest
    if: contains(github.event.pull_request.labels.*.name, 'run benchmark')
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1
      - uses: julia-actions/julia-buildpkg@latest
      - name: Install dependencies
        run: julia -e 'using Pkg; pkg"add PkgBenchmark BenchmarkCI@0.1"'
      - name: Run benchmarks
        run: julia -e 'using BenchmarkCI; BenchmarkCI.judge()'
      - name: Post results
        run: julia -e 'using BenchmarkCI; BenchmarkCI.postjudge()'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Setup with benchmark/Manifest.toml

If benchmark/Manifest.toml is checked into the repository, benchmark/Project.toml must include the parent project as well. Run `dev ..` in the benchmark/ directory to add it:

shell> cd ~/.julia/dev/MyProject/

shell> cd benchmark/

(@v1.x) pkg> activate .
Activating environment at `~/.julia/dev/MyProject/benchmark/Project.toml`

(benchmark) pkg> dev ..
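
For CI scripts, the same step can be done non-interactively (a sketch, run from the repository root; it mirrors the Buildkite script quoted later on this page):

using Pkg
Pkg.activate("benchmark")
Pkg.develop(path = ".")  # dev the parent project into benchmark/Project.toml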

Additional setup (recommended)

It is recommended to add the following two lines to .gitignore:

/.benchmarkci
/benchmark/*.json

This is useful for running BenchmarkCI locally (see below).

Printing benchmark result (optional)

Posting the benchmark result as a comment on every push to each PR may be too noisy. In that case, using BenchmarkCI.displayjudgement() instead of BenchmarkCI.postjudge() may be preferable.

      - name: Print judgement
        run: julia -e 'using BenchmarkCI; BenchmarkCI.displayjudgement()'

Store benchmark result in a Git branch (optional; very experimental)

Alternatively, the benchmark result and report markdown can be pushed to a git branch benchmark-results (example).

      - name: Push results
        run: julia -e "using BenchmarkCI; BenchmarkCI.pushresult()"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SSH_KEY: ${{ secrets.DOCUMENTER_KEY }}

This method can also be used in Travis CI. See this example .travis.yml.

WARNING: The storage format may change across releases.

Running BenchmarkCI interactively

shell> cd ~/.julia/dev/MyProject/

julia> using BenchmarkCI

julia> BenchmarkCI.judge()
...

julia> BenchmarkCI.displayjudgement()
...


BenchmarkCI.jl's Issues

In-place comment update

@aviatesk composed a solution that updates the comment in place:

https://github.com/aviatesk/JET.jl/blob/8a458dd8d675c8e26c1149517c8dfbb1d755f01b/.github/workflows/benchmark.yml#L28-L68

Quoting my comment on Slack:

I moved to git repository-based output (https://github.com/tkf/BenchmarkCI.jl#store-benchmark-result-in-a-git-branch-optional-very-experimental) so I'm not personally annoyed by the comment spamming anymore. But this is a bit more involved to set up, so "in-place" comment updating like yours is a really nice default. I think it should be possible to write up a GitHub API-based solution in BenchmarkCI. But I wonder if it would be better to just provide the set of actions you wrote up as a composite action? https://docs.github.com/en/free-pro-team@latest/actions/creating-actions/creating-a-composite-run-steps-action
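
For reference, editing an existing comment via the GitHub REST API is a single PATCH request (PATCH /repos/{owner}/{repo}/issues/comments/{comment_id}). A minimal sketch using HTTP.jl and JSON.jl; this is not BenchmarkCI API, and update_comment and its arguments are illustrative:

using HTTP, JSON

function update_comment(owner, repo, comment_id, body, token)
    # PATCH the comment body in place instead of posting a new comment.
    url = "https://api.github.com/repos/$owner/$repo/issues/comments/$comment_id"
    headers = ["Authorization" => "token $token",
               "Accept" => "application/vnd.github.v3+json"]
    return HTTP.request("PATCH", url, headers, JSON.json(Dict("body" => body)))
end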

Benchmark without comparing

A comparison (i.e., judge) is not always necessary.
For example, on a plain push without a pull request, there is (probably) nothing to compare against.
We just want to run the benchmarks, see how we are doing, and maybe push the results to a data repo.

Moreover, when a pull request is merged into origin/master and judge is triggered, it will probably run the same benchmark twice (baseline == target).

If the idea sounds okay, I could make a PR to add in-house support for running a single benchmark and posting/pushing the results.

Let me know!
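
For what it's worth, a single no-comparison run is already possible with the documented PkgBenchmark API (a sketch; "MyProject" is a placeholder for the package being benchmarked):

using PkgBenchmark

results = benchmarkpkg("MyProject")              # benchmark the current state only
export_markdown("benchmark-result.md", results)  # write a human-readable report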

More robust Git user configuration

Currently, BenchmarkCI.jl fails in PkgEval with

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: unable to auto-detect email address (got 'pkgeval@875c69d6f5c8.(none)')
updating: Error During Test at /home/pkgeval/.julia/packages/BenchmarkCI/jXf18/test/test_updating.jl:25
  Got exception outside of a @test
  failed process: Process(`git commit --allow-empty-message --message ''`, ProcessExited(128)) [128]
  
  Stacktrace:
    [1] pipeline_error
      @ ./process.jl:525 [inlined]
    [2] run(::Cmd; wait::Bool)
      @ Base ./process.jl:440
    [3] run(::Cmd)
      @ Base ./process.jl:438
    [4] (::BenchmarkCI.GitUtils.var"#7#14"{String, String})()
      @ BenchmarkCI.GitUtils ~/.julia/packages/BenchmarkCI/jXf18/src/gitutils.jl:55
    [5] cd(f::BenchmarkCI.GitUtils.var"#7#14"{String, String}, dir::String)
      @ Base.Filesystem ./file.jl:104
    [6] (::BenchmarkCI.GitUtils.var"#2#8"{Nothing, String, Main.TestBenchmarkCI.TestUpdating.var"#4#9", String, String})(tmpd::String)
      @ BenchmarkCI.GitUtils ~/.julia/packages/BenchmarkCI/jXf18/src/gitutils.jl:48
    [7] mktempdir(fn::BenchmarkCI.GitUtils.var"#2#8"{Nothing, String, Main.TestBenchmarkCI.TestUpdating.var"#4#9", String, String}, parent::String; prefix::String)
      @ Base.Filesystem ./file.jl:709
    [8] mktempdir
      @ ./file.jl:707 [inlined]
    [9] #updating#1
      @ ~/.julia/packages/BenchmarkCI/jXf18/src/gitutils.jl:16 [inlined]
   [10] updating
      @ ~/.julia/packages/BenchmarkCI/jXf18/src/gitutils.jl:16 [inlined]
   [11] (::Main.TestBenchmarkCI.TestUpdating.var"#3#8")(dir::String)
      @ Main.TestBenchmarkCI.TestUpdating ~/.julia/packages/BenchmarkCI/jXf18/test/test_updating.jl:32
   [12] mktempdir(fn::Main.TestBenchmarkCI.TestUpdating.var"#3#8", parent::String; prefix::String)
      @ Base.Filesystem ./file.jl:709
   [13] mktempdir(fn::Function, parent::String)
      @ Base.Filesystem ./file.jl:707
   [14] top-level scope
      @ ~/.julia/packages/BenchmarkCI/jXf18/test/test_updating.jl:26
   [15] top-level scope
      @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:1113
   [16] top-level scope
      @ ~/.julia/packages/BenchmarkCI/jXf18/test/test_updating.jl:26
   [17] include(mod::Module, _path::String)
      @ Base ./Base.jl:376
   [18] include(x::String)
      @ Main.TestBenchmarkCI ~/.julia/packages/BenchmarkCI/jXf18/test/runtests.jl:1
   [19] top-level scope
      @ ~/.julia/packages/BenchmarkCI/jXf18/test/runtests.jl:7
   [20] top-level scope
      @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:1188
   [21] include(fname::String)
      @ Base.MainInclude ./client.jl:444
   [22] top-level scope
      @ none:6
   [23] eval(m::Module, e::Any)
      @ Core ./boot.jl:345
   [24] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:261
   [25] _start()
      @ Base ./client.jl:485

https://github.com/JuliaCI/NanosoldierReports/blob/843cc0c9eecf24dce965f484077aa7123b39e9eb/pkgeval/by_date/2020-09/09/logs/BenchmarkCI/1.6.0-DEV-08486888ba.log#L1683-L1696
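
A possible fix is a defensive fallback in the Git helper: configure a repository-local identity before committing when none is available (a sketch using standard git commands; the function name and email value are illustrative):

function ensure_git_identity(dir)
    # `git config user.email` exits non-zero when no identity is configured.
    if !success(`git -C $dir config user.email`)
        run(`git -C $dir config user.email "benchmarkci@example.com"`)
        run(`git -C $dir config user.name "BenchmarkCI"`)
    end
end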

Benchmarking fails when target and baseline have different dependency compat versions

Thanks for your work on so many great packages!

We have a pretty simple setup for BenchmarkCI:

https://github.com/ITensor/ITensors.jl/tree/v0.1.41/benchmark
https://github.com/ITensor/ITensors.jl/blob/v0.1.41/.github/workflows/benchmark.yml

which generally runs fine. However, when we update the compatibility of one of our dependencies (for example in this PR) the baseline benchmark fails with something like:

ERROR: LoadError: Unsatisfiable requirements detected for package NDTensors [23ae76d9]:
[...]

I suppose this is because it is trying to use the same version of the NDTensors dependency for both the target and baseline benchmarks, but I don't know enough about BenchmarkCI.jl/PkgBenchmark.jl to understand why that is happening. Is there a way I can adjust my configuration to get this case working? I tried explicitly setting the correct compat versions in benchmark/Project.toml for both the master branch and the PR branch, but it gives the same error.
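
As a possible (untested) workaround sketch that bypasses BenchmarkCI: call PkgBenchmark's judge directly, which may let each side resolve its own dependencies when no benchmark/Manifest.toml is checked in:

using PkgBenchmark

# Compare the current checkout against master; "ITensors" is the package
# from this issue, and the ref name is illustrative.
result = judge("ITensors", "master")
export_markdown(stdout, result)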

BenchmarkCI only works on x86 CPUs

Running BenchmarkCI's judge function calls CpuId.jl.
As noted in its README and discussed here, CpuId.jl (and, more generally, the CPUID instruction) only supports x86 architectures:

Works on Julia 1.0 and later, on Linux, Mac and Windows with Intel CPUs and AMD CPUs. Other processor types like ARM are not supported.

By depending on CpuId.jl, BenchmarkCI therefore also only supports x86 architectures and doesn't work on ARM Macs.
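
For context, Base itself exposes portable CPU introspection that also works on ARM; a minimal sketch of an alternative:

# Sys.cpu_info() is in Base and works on non-x86 hosts as well.
cpu_model() = isempty(Sys.cpu_info()) ? "unknown" : first(Sys.cpu_info()).model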

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Respond gracefully when benchmarks not set up on baseline

Thanks for this useful package! I am currently setting up both PkgBenchmark and BenchmarkCI in the same branch of my repo. As a result, the file benchmark/benchmarks.jl does not exist on the master branch. This causes the baseline run of benchmarks to fail on the pending PR.

I had to dig into the BenchmarkCI source code to figure out what was going wrong. It would be nice if a more informative error, or even just a warning, were raised when the baseline branch is not set up for benchmarks.
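
One possible shape for such a check (a sketch; the function name is illustrative) is to verify that the baseline ref actually contains the suite file before running it:

function baseline_has_suite(baseline = "origin/master")
    # `git cat-file -e <rev>:<path>` exits zero iff the file exists on that ref.
    ok = success(`git cat-file -e $baseline:benchmark/benchmarks.jl`)
    ok || @warn "No benchmark/benchmarks.jl on $baseline; the baseline run will fail."
    return ok
end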

Remove the use of Base.libm_name

We'll be removing Base.libm_name in Julia 1.9 and will eventually remove OpenLibm from Base.

JuliaLang/julia#42299

To continue using OpenLibm, simply use OpenLibm_jll explicitly in your Project.toml, and this recipe should allow you to keep using libm_name as before:

using OpenLibm_jll
libm_name = OpenLibm_jll.libopenlibm

If you are using libm_name mainly for informational purposes, it can be deleted, since Julia uses its native implementation for most libm functions, and will default to system libm when necessary.

Use GitHub.jl?

... though what's done here ATM is simple enough to do with curl.

Unexpected baseline memory allocations

I have a dynamics simulation package (https://github.com/janbruedigam/ConstrainedDynamics.jl) based on StaticArrays for which I'm running benchmark simulations. The main goal for me is to make sure that new commits do not introduce memory allocations as that would drastically reduce performance.

The issue I have is that for a pull request, the target benchmark is correct, but the baseline benchmark is not. For some reason, the baseline benchmark has a large number of allocations even though it did not have these allocations before it was merged into the master. So the judgement tells me the target performs a lot better simply because of the allocations of the baseline, which makes it a problematic comparison.


Using BenchmarkCI in Buildkite pipelines

I tried using the following script for running benchmarks in a Buildkite pipeline:

using Pkg

# Activate `benchmark`
Pkg.activate("benchmark")
Pkg.develop(path=".") # Add the parent project to the benchmark environment
Pkg.instantiate()
Pkg.build()
Pkg.precompile()

# Start benchmarking
using BenchmarkCI
BenchmarkCI.judge()
BenchmarkCI.postjudge()

But it errors with:

Activating environment at `/<buildkite-path>benchmark/Project.toml`
:
Installed Parsers ──────────────────── v2.1.2
:
✓ Parsers
:
 15 dependencies successfully precompiled in 12 seconds (58 already precompiled)
:
[ Info: Using existing manifest file.
--
  | PkgBenchmark: Running benchmarks...
  | Activating environment at `<buildkite-path>/benchmark/Project.toml`
  | PkgBenchmark: creating benchmark tuning file /<buildkite-path>/benchmark/tune.json..
(1/3) tuning "new"...
:
:
PkgBenchmark: benchmark results written to .benchmarkci/result-target.json

  | PkgBenchmark: Running benchmarks...
  | ERROR: LoadError: LoadError: ArgumentError: Package Parsers [69de0a69-1ddd-5017-9359-2bf0b02dc9f0] is required but does not seem to be installed:
  | - Run `Pkg.instantiate()` to install all recorded dependencies.

(the lines containing only `:` stand in for the entire package list)

The issue isn't specific to Parsers; it's just the first package it fails to find.


  • BenchmarkCI does an impressive job with GitHub Actions.
  • If this could be reused with Buildkite runners (which are being widely adopted), it would be great.
  • While we're at it, it is worth checking whether postjudge() can post results from a Buildkite pipeline as well.

New repositories use `main` instead of `master`

Is it possible to optionally compare against a branch other than master?

julia> BenchmarkCI.judge()
fatal: couldn't find remote ref refs/heads/master
ERROR: failed process: Process(`git fetch origin +refs/heads/master:refs/remotes/origin/master`, ProcessExited(128)) [128]
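
A possible workaround, assuming the installed BenchmarkCI version exposes a baseline keyword on judge (check its docstring before relying on this):

julia> BenchmarkCI.judge(baseline = "origin/main")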
