athletic's People

Contributors

filp, gabrielsch, henrikbjorn, matthimatiker, ocramius, pborreli, polyfractal, staabm, ulricheckhardt

athletic's Issues

[Feature request] tests comparison

Hi!

I've been looking into benchmarking test suites to integrate into ProxyManager (see Ocramius/ProxyManager#62) and I think athletic is the best one I've found so far in terms of code organization.

There are a couple of things I'd like to ask, as clarification and/or new feature requests (if possible).

  • Does athletic profile memory usage? I couldn't find anything related to that in the method results.
  • Would it be possible to group method results (with something like @group some-test and @baseline, for example) and then compare the groups, like jeremyFreeAgent/Clutch does? (Note: Clutch is unsuited for this because of the overhead introduced by xhprof.)

I will try integrating athletic as it is right now, and eventually try to patch together a result printer, but I hope these features can eventually be put into discussion.

I generally LOVE how athletic keeps tests organized, but Clutch is generally better in terms of details and test comparison. I'd actually suggest integrating Clutch itself and opening an issue there to ask whether xhprof could be a problem.

Refactored to support dynamic/multiple benchmarks

Today I refactored some of this package to support:

  • multiple benchmarking classes, such as one for time and one for memory.
  • custom benchmarking classes. This lets you benchmark almost anything, from network latency to database IOPS.
  • showing/hiding specific columns in the result output.

This required refactoring these files:

  • AthleticEvent
  • DefaultFormatter
  • MethodResults
  • A new directory: Benchmarkers
    • BenchmarkerInterface
    • TimeBenchmarker
    • MemoryBenchmarker

See my commits here: https://github.com/oytuntez/athletic/commits/master.

I didn't open a PR because I wasn't sure how to organize the changes into tags/branches.
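To make the idea concrete, here is a minimal sketch of what a pluggable benchmarker abstraction along these lines could look like. The interface and class names mirror the file list above, but the method signatures and bodies are illustrative assumptions, not the actual code from those commits.

```php
<?php

// Hypothetical sketch of a pluggable benchmarker; names follow the file
// list above, but signatures and bodies are assumptions for illustration.

interface BenchmarkerInterface
{
    /** Called immediately before the benchmarked method runs. */
    public function start(): void;

    /** Called immediately after; returns the measured value. */
    public function stop(): float;

    /** Column label used by the formatter. */
    public function getLabel(): string;
}

class TimeBenchmarker implements BenchmarkerInterface
{
    private float $startedAt = 0.0;

    public function start(): void
    {
        $this->startedAt = microtime(true);
    }

    public function stop(): float
    {
        return microtime(true) - $this->startedAt;
    }

    public function getLabel(): string
    {
        return 'Average Time';
    }
}

class MemoryBenchmarker implements BenchmarkerInterface
{
    private int $startedAt = 0;

    public function start(): void
    {
        $this->startedAt = memory_get_usage();
    }

    public function stop(): float
    {
        return (float) (memory_get_usage() - $this->startedAt);
    }

    public function getLabel(): string
    {
        return 'Memory Delta';
    }
}

// Usage: the runner would hold a list of benchmarkers and wrap each
// iteration in start()/stop() calls for every one of them.
$time = new TimeBenchmarker();
$time->start();
usleep(1000); // ~1 ms of stand-in work
$elapsed = $time->stop();
```

With this shape, "benchmark almost anything" reduces to implementing one small interface; the formatter only needs getLabel() and the stop() values.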

Athletic events cannot inherit from classes not including `AthleticEvent` in the name

The current implementation of Athletic\Discovery\Parser does not allow discovered events to inherit from classes that do not contain AthleticEvent in their name.

I'm not sure why this limitation is in place, but the parser looks quite simplistic, and honestly I don't have a quick fix for it right now.

The idea would be to use a more sophisticated parser, but that would of course mean much heavier logic.

Is this intended or bogus behaviour?
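To illustrate the difference between the two discovery strategies: a string check on the parent class name (mirroring the simplistic matching described above) rejects intermediate base classes, while a reflection check accepts any subclass regardless of naming. This is a sketch only, not the actual Athletic\Discovery\Parser code.

```php
<?php

// Sketch contrasting name-based matching with reflection-based matching.
// The classes here are hypothetical stand-ins for discovered events.

abstract class AthleticEvent {}

class MyBaseBenchmark extends AthleticEvent {}   // name lacks "AthleticEvent"
class ConcreteEvent extends MyBaseBenchmark {}

// Simplistic strategy: look for "AthleticEvent" in the parent's name.
function nameBasedCheck(string $parentName): bool
{
    return strpos($parentName, 'AthleticEvent') !== false;
}

// Reflection strategy: check the actual inheritance chain.
function reflectionBasedCheck(string $className): bool
{
    return (new ReflectionClass($className))->isSubclassOf('AthleticEvent');
}

// The name-based check rejects ConcreteEvent's direct parent...
$byName = nameBasedCheck(get_parent_class('ConcreteEvent'));   // false
// ...while reflection correctly recognizes ConcreteEvent as an event.
$byReflection = reflectionBasedCheck('ConcreteEvent');          // true
```

The catch, as the issue notes, is that reflection requires loading the class first, whereas the current parser presumably works on the raw source, which is why the fix isn't trivial.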

Setup instructions are incomplete

Hi!
I'm trying to evaluate athletic and I'm having issues getting even a simple example to run. The "sample event" from the readme is obviously not intended to be used as-is, so I:

  • created a folder
  • composer init
  • composer require polyfractal/athletic
  • created a simple file SleepEvent.php with a class SleepEvent deriving from AthleticEvent

I'm trying to run it using "vendor/bin/athletic -p ./SleepingEvent.php", which fails with "ERROR: Class \SleepingEvent does not exist". I've tried various other combinations, but to no avail; the class name in the error message changes slightly, but that's all. I'm using PHP 5.6, in case that matters.

Set desired running time

It would be nice to have annotations for setting a "desired running time". E.g.:

"this" method should preferably run below "0.0002" at average

This way, benchmarking could also serve as a form of testing: we'd get quick feedback on methods that don't run within the preferred time window.

<?php

class TestEvent extends AthleticEvent
{
    /**
     * @iterations 1000
     * @maxTime 0.05
     * @avgTime 0.002
     */
    public function someMethod()
    {
    }
}

This would translate as:

the method should run 1000 iterations; no single iteration may exceed the maximum time limit (0.05), and the average time should be less than 0.002.
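A rough sketch of how such a check could work, assuming the annotations are read out of the docblock with a simple regex (the @maxTime/@avgTime names come from the proposal above; the checking logic itself is hypothetical):

```php
<?php

// Hypothetical check of @maxTime / @avgTime annotations against a set of
// measured iteration times. Annotation names come from the proposal above.

function parseAnnotations(string $docComment): array
{
    preg_match_all('/@(\w+)\s+([\d.]+)/', $docComment, $m, PREG_SET_ORDER);
    $annotations = [];
    foreach ($m as $match) {
        $annotations[$match[1]] = (float) $match[2];
    }
    return $annotations;
}

function checkTimings(array $times, array $annotations): bool
{
    // Hard limit: not a single iteration may exceed @maxTime.
    if (isset($annotations['maxTime']) && max($times) > $annotations['maxTime']) {
        return false;
    }
    // Soft limit: the average must stay below @avgTime.
    $avg = array_sum($times) / count($times);
    if (isset($annotations['avgTime']) && $avg > $annotations['avgTime']) {
        return false;
    }
    return true;
}

$doc = "/**\n * @iterations 1000\n * @maxTime 0.05\n * @avgTime 0.002\n */";
$annotations = parseAnnotations($doc);

$pass = checkTimings([0.001, 0.002, 0.0015], $annotations); // within both limits
$fail = checkTimings([0.001, 0.06], $annotations);          // one iteration over @maxTime
```

A failing check could then flip the process exit code, which would make athletic usable as a performance gate in CI.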

Progress bar in CLI output

The terminal should update its progress, letting us know what the status is. Really useful when you have lots of benchmarks, each taking quite a bit of time.
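The usual dependency-free approach is a carriage-return progress line that is redrawn after every iteration. A minimal sketch (hypothetical; athletic has no such output today, which is the point of this issue):

```php
<?php

// Minimal carriage-return progress line: "\r" moves the cursor back to
// the start of the line so each call overwrites the previous render.

function renderProgress(int $done, int $total, int $width = 20): string
{
    $filled = (int) round($width * $done / $total);
    $bar = str_repeat('=', $filled) . str_repeat(' ', $width - $filled);
    return sprintf("\r[%s] %d/%d iterations", $bar, $done, $total);
}

$line = renderProgress(50, 100);
// In a real run: echo renderProgress($i, $iterations); inside the loop,
// followed by a final newline once the benchmark completes.
```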

Memory

If you have a lot of iterations, memory usage skyrockets. Maybe results could be temporarily saved to a file somewhere while doing the calculations?
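One way to sketch this idea: spool per-iteration timings to a php://temp stream (which transparently spills to disk past a size threshold) instead of accumulating them in an array, then stream back through the file to aggregate. This is an illustration of the suggestion above, not athletic's actual implementation.

```php
<?php

// Spool iteration timings to a temp stream instead of an in-memory array.
// php://temp keeps data in memory up to the given limit, then uses disk.

$handle = fopen('php://temp/maxmemory:1048576', 'r+'); // spill past 1 MB

$iterations = 1000;
for ($i = 0; $i < $iterations; $i++) {
    $elapsed = 0.001; // stand-in for a measured iteration time
    fwrite($handle, $elapsed . "\n");
}

// Aggregate by streaming the file back one line at a time, so peak
// memory stays flat regardless of the iteration count.
rewind($handle);
$sum = 0.0;
$count = 0;
while (($line = fgets($handle)) !== false) {
    $sum += (float) $line;
    $count++;
}
fclose($handle);

$average = $sum / $count;
```

The trade-off is that statistics needing the full sample (e.g. percentiles) require either a second pass or a streaming approximation.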

Exceptions are displayed without stack trace

Hi there,

Is there any way to have athletic display more information when an exception occurs? It only displays the exception message, but sometimes that isn't enough :(

For example, here I'm benchmarking a lib which throws an exception (with a very poor message), and then I'm stuck trying to resolve it:

% php -n vendor/bin/athletic -p benchmarks -b vendor/autoload.php
ERROR: Benchmark\Fixture\Foo

Test Grouping

Test grouping allows you to group tests inside of an Event and compare against an optional baseline value. This feature introduces two new annotations:

  • @group groupName specifies the name of the group that this particular test belongs to. All tests with this group name will be compared against each other and displayed together in the results.
  • @baseline specifies which test should be the baseline value that other tests are compared against. Each group may have only one baseline.

An example Event setup:

/**
 * @iterations 100
 * @group small
 * @baseline
 */
public function slowIndexingAlgoSmall()
{}

/**
 * @iterations 100
 * @group small
 */
public function fastIndexingAlgoSmall()
{}

/**
 * @iterations 100
 * @group large
 * @baseline
 */
public function slowIndexingAlgoLarge()
{}

/**
 * @iterations 100
 * @group large
 */
public function fastIndexingAlgoLarge()
{}

Presently, the default formatter does not support grouping. To display grouping appropriately, you must use the new GroupedFormatter. To do this, pass an additional -f or --formatter flag on the command line:

./vendor/bin/athletic -b <path_to_benchmark> -f GroupedFormatter

Results will then look something like this:

Elasticsearch\Benchmarks\IndexingEvent
  small
    Method Name                          Iterations   Average Time      Ops/s       Relative
    -----------------------------------  -----------  ----------------  ----------  ---------
    slowIndexingAlgoSmall : [Baseline]   [100      ]  [0.0059354090691] [168.48038]
    fastIndexingAlgoSmall :              [100      ]  [0.0013290691376] [752.40631] [22.39%]

  large
    Method Name                          Iterations   Average Time      Ops/s       Relative
    -----------------------------------  -----------  ----------------  ----------  ---------
    slowIndexingAlgoLarge : [Baseline]   [100      ]  [0.0022656822205] [441.36816]
    fastIndexingAlgoLarge :              [100      ]  [0.0032706809044] [305.74673] [144.36%]

RFC - Improving Pull Request Process via automated PHPCS checks

I run http://stickler-ci.com, a service aimed at improving code quality by simplifying code review: it automates code style feedback in pull requests. As a fellow open source maintainer I found I was spending a significant amount of time giving contributors feedback on how code should be formatted, and thought there had to be a better way.

Stickler-CI is my attempt at building that better way. By surfacing code style errors as pull request comments and a build status, it makes it easy for both maintainers and contributors to know when a pull request matches a project's style.

I wanted to know if you were interested in trying out stickler-ci. If you do, I can submit a pull request with the configuration file, but a maintainer will need to enable webhooks by logging into https://stickler-ci.com and enabling the webhook.

`setUp` is only called before an entire chunk of iterations

I've just found out that setUp is called only once, before all the iterations start. Here's the pseudo-code of the currently implemented logic:

$this->setUp();

for($i = 0; $i < $iterations; $i += 1) {
    $this->benchmarkMethod();
}

This is the correct logic in my opinion:

for($i = 0; $i < $iterations; $i += 1) {
    $this->setUp();
    $this->benchmarkMethod();
}
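A self-contained illustration of the proposed per-iteration behaviour, using call counters to make the difference observable (the event class here is a hypothetical stand-in, not an actual AthleticEvent subclass):

```php
<?php

// Counts setUp calls to demonstrate the proposed per-iteration fixture:
// with setUp inside the loop, it runs once per iteration, not once total.

class CountingEvent
{
    public int $setUpCalls = 0;
    public int $methodCalls = 0;

    public function setUp(): void
    {
        $this->setUpCalls++;
    }

    public function benchmarkMethod(): void
    {
        $this->methodCalls++;
    }
}

$event = new CountingEvent();
$iterations = 5;

for ($i = 0; $i < $iterations; $i++) {
    $event->setUp();            // fresh fixture before every iteration
    $event->benchmarkMethod();
}
```

Note that moving setUp inside the loop also means its cost must be excluded from the timed region, otherwise fixtures with expensive setup would distort the per-iteration numbers.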

Add warm up iterations

I'm not sure if this is really necessary for vanilla PHP, but at the very least it wouldn't hurt. HHVM uses a JIT and would benefit from a warm-up phase.
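A sketch of what a warm-up phase could look like: run a fixed number of unrecorded iterations first, so a JIT has a chance to optimize the hot path before timings are collected. The split between warm-up and measured iterations is a hypothetical design, not athletic's current behaviour.

```php
<?php

// Warm-up sketch: discard the first $warmUp runs, record only the rest.

function runBenchmark(callable $method, int $iterations, int $warmUp = 10): array
{
    for ($i = 0; $i < $warmUp; $i++) {
        $method(); // warm-up: results intentionally discarded
    }

    $times = [];
    for ($i = 0; $i < $iterations; $i++) {
        $start = microtime(true);
        $method();
        $times[] = microtime(true) - $start;
    }
    return $times;
}

$calls = 0;
$times = runBenchmark(function () use (&$calls) { $calls++; }, 100, 10);
// The method ran 110 times in total, but only 100 timings were recorded.
```

A @warmUp annotation next to @iterations would be a natural way to expose this per method.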

Change private methods to protected

The AthleticEvent class should change its private methods to protected, or alternatively:

  • add arguments $currentMethod, $currentIteration and $totalIterations to setUp and tearDown
  • add callbacks for before and after a method is run: setUpMethod and tearDownMethod (with argument $currentMethod)
