
pytest-rerunfailures's Introduction

pytest-rerunfailures

pytest-rerunfailures is a plugin for pytest that re-runs tests to eliminate intermittent failures.


Requirements

You will need the following prerequisites in order to use pytest-rerunfailures:

  • Python 3.8+ or PyPy3
  • pytest 7.2 or newer

This plugin can recover from a hard crash with the following optional prerequisites:

  • pytest-xdist 2.3.0 or newer

This package is currently tested against the last 5 minor pytest releases. If you work with an older version of pytest, consider updating pytest or using one of the earlier versions of this package.

Installation

To install pytest-rerunfailures:

$ pip install pytest-rerunfailures

Recover from hard crashes

If one or more tests trigger a hard crash (for example, a segfault), this plugin ordinarily cannot rerun them. However, if a compatible version of pytest-xdist is installed and the tests are run through pytest-xdist using the -n flag, this plugin can rerun crashed tests, provided the workers and controller can reach each other (which is almost always the case, since they usually run on the same machine). If they cannot, this functionality may not work.
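
For example, a minimal invocation (the worker count of 2 is arbitrary):

$ pytest -n 2 --reruns 3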

Re-run all failures

To re-run all test failures, use the --reruns command line option with the maximum number of times you'd like each failing test to be re-run:

$ pytest --reruns 5

Failed fixture or setup_class will also be re-executed.
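
As a small illustration (the fixture below and its failure are hypothetical), a failure raised while setting up a fixture is retried together with the test that uses it when --reruns is given:

import random

import pytest

@pytest.fixture
def unstable_resource():
    # Illustrative flaky setup: fails roughly half of the time.
    if random.random() < 0.5:
        raise RuntimeError("resource temporarily unavailable")
    return "resource"

def test_uses_unstable_resource(unstable_resource):
    assert unstable_resource == "resource"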

To add a delay between re-runs, use the --reruns-delay command line option with the number of seconds you would like to wait before the next re-run is launched:

$ pytest --reruns 5 --reruns-delay 1

Re-run all failures matching certain expressions

To re-run only those failures that match a certain list of expressions, use the --only-rerun flag and pass it a regular expression. For example, the following would only rerun those errors that match AssertionError:

$ pytest --reruns 5 --only-rerun AssertionError

Passing the flag multiple times accumulates the arguments, so the following would only rerun those errors that match AssertionError or ValueError:

$ pytest --reruns 5 --only-rerun AssertionError --only-rerun ValueError
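
As an illustration (hypothetical tests), with the command above a failure raising ValueError is retried, while a KeyError failure is reported immediately because it matches neither expression:

def test_flaky_value():
    raise ValueError("rerun up to 5 times")

def test_broken_lookup():
    raise KeyError("not rerun")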

Re-run all failures other than matching certain expressions

To re-run only those failures that do not match a certain list of expressions, use the --rerun-except flag and pass it a regular expression. For example, the following would rerun only those errors that do not match AssertionError:

$ pytest --reruns 5 --rerun-except AssertionError

Passing the flag multiple times accumulates the arguments, so the following would rerun only those errors that match neither AssertionError nor OSError:

$ pytest --reruns 5 --rerun-except AssertionError --rerun-except OSError

Note

When the AssertionError comes from the use of the assert keyword, use --rerun-except assert instead, because the failure text produced by a bare assert statement typically starts with assert rather than AssertionError:

$ pytest --reruns 5 --rerun-except assert

Re-run individual failures

To mark individual tests as flaky, and have them automatically re-run when they fail, add the flaky mark with the maximum number of times you'd like the test to be re-run:

@pytest.mark.flaky(reruns=5)
def test_example():
    import random
    assert random.choice([True, False])

Note that when teardown fails, two reports are generated for the case, one for the test case and the other for the teardown error.

You can also specify the re-run delay time in the marker:

@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_example():
    import random
    assert random.choice([True, False])

You can also specify an optional condition in the re-run marker:

import sys

@pytest.mark.flaky(reruns=5, condition=sys.platform.startswith("win32"))
def test_example():
    import random
    assert random.choice([True, False])

Exception filtering can be accomplished by specifying regular expressions for only_rerun and rerun_except. They override the --only-rerun and --rerun-except command line arguments, respectively.

Arguments can be a single string:

@pytest.mark.flaky(rerun_except="AssertionError")
def test_example():
    raise AssertionError()

Or a list of strings:

@pytest.mark.flaky(only_rerun=["AssertionError", "ValueError"])
def test_example():
    raise AssertionError()

You can use @pytest.mark.flaky(condition) in the same way as @pytest.mark.skipif(condition); see the pytest documentation on skipif:

@pytest.mark.flaky(reruns=2, condition="sys.platform.startswith('win32')")
def test_example():
    import random
    assert random.choice([True, False])

# exactly the same as the above
@pytest.mark.flaky(reruns=2, condition=sys.platform.startswith("win32"))
def test_example():
    import random
    assert random.choice([True, False])

Note that the test will re-run for any condition that is truthy.
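
For example (a minimal sketch; the CI environment variable name is only an illustration), a non-empty string is truthy, so reruns are enabled only when the variable is set:

import os
import random

import pytest

@pytest.mark.flaky(reruns=3, condition=os.environ.get("CI"))
def test_only_retried_on_ci():
    assert random.choice([True, False])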

Output

Here's an example of the output provided by the plugin when run with --reruns 2 and -r aR:

test_report.py RRF

================================== FAILURES ==================================
__________________________________ test_fail _________________________________

    def test_fail():
>       assert False
E       assert False

test_report.py:9: AssertionError
============================ rerun test summary info =========================
RERUN test_report.py::test_fail
RERUN test_report.py::test_fail
============================ short test summary info =========================
FAIL test_report.py::test_fail
======================= 1 failed, 2 rerun in 0.02 seconds ====================

Note that output will show all re-runs. Tests that fail on all the re-runs will be marked as failed.

Compatibility

  • This plugin may not be used with class, module, and package level fixtures.
  • This plugin is not compatible with pytest-xdist's --looponfail flag.
  • This plugin is not compatible with the core --pdb flag.
  • This plugin is not compatible with the plugin flaky, you can only have pytest-rerunfailures or flaky but not both.

Resources

Development

  • Test execution count can be retrieved from the execution_count attribute on the test item. Example:

    from pytest import hookimpl

    @hookimpl(tryfirst=True)
    def pytest_runtest_makereport(item, call):
        print(item.execution_count)

pytest-rerunfailures's People

Contributors

astraw38, bluetech, bobotig, borda, cfulljames, davehunt, digitronik, edgarostrowski, floer32, florianpilz, gnikonorov, hugovk, icemac, killachicken, klrmn, lukasnebr, mgorny, ntessore, olegkuzovkov, oscargus, pabs3, pre-commit-ci[bot], rgreinho, ronnypfannschmidt, sallner, scarabeusiv, sileht, the-compiler, tltx, tomviner


pytest-rerunfailures's Issues

Percentage Flakiness

As an alternative to #43, if you have an estimate for the flakiness rate of a specific test (say, 50%, or 12%) it would be good if you could specify that on the marker:

@pytest.mark.flaky(max_runs=50, flakiness=.2)

Tests are considered failed if they fail more than the flakiness percentage so that you can disambiguate what is an acceptable level of error from what is something breaking down.
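
A rough sketch of the proposed decision rule (nothing here is implemented by the plugin): with max_runs=50 and flakiness=.2, up to 10 failing runs would still count as a pass:

def passes(failed_runs, max_runs, flakiness):
    # Proposed rule: fail only if the observed failure rate exceeds
    # the declared flakiness.
    return failed_runs / max_runs <= flakiness

assert passes(10, 50, 0.2)
assert not passes(11, 50, 0.2)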

If you could combine this with my suggestion of distinguished causes you could write

@pytest.mark.flaky(cause=lambda exc: isinstance(exc, IOError), flakiness=.33)
@pytest.mark.flaky(cause=lambda exc: isinstance(exc, HTTP404), flakiness=.5)
def test_api():
    ....

to allow each cause to fail up to a certain amount of the time without failing the whole test.

Conflict with pytest-flake8

When having both pytest-flake8 and pytest-rerunfailures activated I get the following traceback on a flake8 error:

Traceback (most recent call last):
  File ".../eggs/pytest-3.10.0-py2.7.egg/_pytest/main.py", line 185, in wrap_session
    session.exitstatus = doit(config, session) or 0
  File ".../eggs/pytest-3.10.0-py2.7.egg/_pytest/main.py", line 225, in _main
    config.hook.pytest_runtestloop(session=session)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/hooks.py", line 284, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/manager.py", line 67, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/manager.py", line 61, in <lambda>
    firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/callers.py", line 203, in _multicall
    gen.send(outcome)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/callers.py", line 81, in get_result
    _reraise(*ex)  # noqa
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File ".../eggs/pytest-3.10.0-py2.7.egg/_pytest/main.py", line 246, in pytest_runtestloop
    item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/hooks.py", line 284, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/manager.py", line 67, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/manager.py", line 61, in <lambda>
    firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/callers.py", line 208, in _multicall
    return outcome.get_result()
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/callers.py", line 81, in get_result
    _reraise(*ex)  # noqa
  File ".../eggs/pluggy-0.8.0-py2.7.egg/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File ".../eggs/pytest_rerunfailures-4.2-py2.7.egg/pytest_rerunfailures.py", line 192, in pytest_runtest_protocol
    _remove_cached_results_from_failed_fixtures(item)
  File ".../eggs/pytest_rerunfailures-4.2-py2.7.egg/pytest_rerunfailures.py", line 130, in _remove_cached_results_from_failed_fixtures
    fixture_info = getattr(item, '_fixtureinfo')
AttributeError: 'Flake8Item' object has no attribute '_fixtureinfo'

setup_class running multiple times

Hi,

we have a class that looks like the one below:

class TestExample(object):

    def setup_class(cls):
        raise Exception('setup failure')

    def test_one(self):
        pass

    def test_two(self):
        pass

    def test_three(self):
        pass

When the setup fails, we expect all the tests to fail. However, after I installed this plugin, setup_class is run before every test.

Is there a way to suppress the rerun of the setup with every test if setup_class fails?

This means that even if I don't use the --reruns option, the plugin changes the default behavior of py.test.

support invert case "rerunpassed"

Would it be within the scope of this plugin to support the inverse case, "rerunpassed", which reruns passed tests up to N times to ensure that they are not flaky?

I understand that this would be a little "against" the name of the plugin, but I assume that, besides another command line option, almost all of the logic would be the same except for the condition deciding whether the test needs to be re-triggered.

I would be happy to provide a pull request if the feature is desired and a consensus on the design is reached. The alternative would be "duplicating" this plugin and naming it pytest-rerunpassed but that would imply a lot of duplicate effort from then on.

pytest-rerunfailures seeing skips as failures

When running a test that includes a @pytest.mark.skipif(), and the test is skipped by pytest, the plugin sees this as a failure and tries to rerun the test, which is, of course, skipped again. With our Selenium tests, this causes a browser to open and then immediately close, which is unnecessary overhead (as is any activity required to run and skip the test again).

The plugin should be updated to treat a skip as a pass so it will not be rerun.

Duplicated counting test case numbers with pytest-rerunfailures plugin

If a test case fails in its teardown method with --reruns=1, it is counted as 2 cases.

Test code:

class Test1(object):

    def teardown_method(self, method):
        raise Exception('tewrdown method error')

    def test1(self):
        pass

with --reruns=1:

$ py.test test_rerun.py -v -s --junitxml=test-result.xml --reruns=1
=============================================================== test session starts ================================================================
platform darwin -- Python 2.7.14, pytest-2.9.1, py-1.7.0, pluggy-0.3.1 -- /Users/jzhang/miniconda2/envs/p2/bin/python
cachedir: .cache
rootdir: /Users/jzhang/Desktop/test_log, inifile:
plugins: rerunfailures-4.1
collected 1 items

test_rerun.py::Test1::test1 PASSED
test_rerun.py::Test1::test1 RERUN
test_rerun.py::Test1::test1 PASSED
test_rerun.py::Test1::test1 ERROR

---------------------------------------- generated xml file: /Users/jzhang/Desktop/test_log/test-result.xml ----------------------------------------
====================================================================== ERRORS ======================================================================
_________________________________________________________ ERROR at teardown of Test1.test1 _________________________________________________________

self = <test_rerun.Test1 object at 0x104073190>, method = <bound method Test1.test1 of <test_rerun.Test1 object at 0x104073190>>

    def teardown_method(self, method):
>       raise Exception('tewrdown method error')
E       Exception: tewrdown method error

test_rerun.py:4: Exception
==================================================== 2 passed, 1 error, 1 rerun in 0.02 seconds ====================================================
(p2) # ~/Desktop/test_log [18:10:33] C:1
$ more test-result.xml
<?xml version="1.0" encoding="utf-8"?><testsuite errors="1" failures="0" name="pytest" skips="0" tests="2" time="0.015"><testcase classname="test_rerun.Test1" file="test_rerun.py" line="5" name="test1" time="0.00033712387085"></testcase><testcase classname="test_rerun.Test1" file="test_rerun.py" line="5" name="test1" time="0.000302791595459"><error message="test setup failure">self = &lt;test_rerun.Test1 object at 0x104073190&gt;, method = &lt;bound method Test1.test1 of &lt;test_rerun.Test1 object at 0x104073190&gt;&gt;

    def teardown_method(self, method):
&gt;       raise Exception(&apos;tewrdown method error&apos;)
E       Exception: tewrdown method error

test_rerun.py:4: Exception</error></testcase></testsuite>

The number of test cases is calculated as 2; we can see tests="2".

When running this case without --reruns=1:

$ py.test test_rerun.py -v -s --junitxml=test-result.xml
=============================================================== test session starts ================================================================
platform darwin -- Python 2.7.14, pytest-2.9.1, py-1.7.0, pluggy-0.3.1 -- /Users/jzhang/miniconda2/envs/p2/bin/python
cachedir: .cache
rootdir: /Users/jzhang/Desktop/test_log, inifile:
plugins: rerunfailures-4.1
collected 1 items

test_rerun.py::Test1::test1 PASSED
test_rerun.py::Test1::test1 ERROR

---------------------------------------- generated xml file: /Users/jzhang/Desktop/test_log/test-result.xml ----------------------------------------
====================================================================== ERRORS ======================================================================
_________________________________________________________ ERROR at teardown of Test1.test1 _________________________________________________________

self = <test_rerun.Test1 object at 0x103d40d50>, method = <bound method Test1.test1 of <test_rerun.Test1 object at 0x103d40d50>>

    def teardown_method(self, method):
>       raise Exception('tewrdown method error')
E       Exception: tewrdown method error

test_rerun.py:4: Exception
======================================================== 1 passed, 1 error in 0.02 seconds =========================================================
(p2) # ~/Desktop/test_log [18:19:55] C:1
$ more test-result.xml
<?xml version="1.0" encoding="utf-8"?><testsuite errors="1" failures="0" name="pytest" skips="0" tests="1" time="0.015"><testcase classname="test_rerun.Test1" file="test_rerun.py" line="5" name="test1" time="0.000448226928711"><error message="test setup failure">self = &lt;test_rerun.Test1 object at 0x103d40d50&gt;, method = &lt;bound method Test1.test1 of &lt;test_rerun.Test1 object at 0x103d40d50&gt;&gt;

    def teardown_method(self, method):
&gt;       raise Exception(&apos;tewrdown method error&apos;)
E       Exception: tewrdown method error

test_rerun.py:4: Exception</error></testcase></testsuite>

The number of test cases is calculated as 1 (tests="1"), which looks more reasonable.

Feature request: rerun test if condition is not met (similar to hypothesis.assume())

Sometimes when running tests with semi-random data (e.g. from factory_boy), you know the test will fail if the generated data matches some condition. Rather than execute the test when you know it will fail, it would be nice if you could tell pytest to re-run that test, probably configurable up to N times. For example:

def test_something_with_different_people():
    person1, person2 = PersonFactory.build_batch(2)
    # if this is True, rerun the test up to N times
    rerun_if(person1.name == person2.name)
    # some test that depends on `person1.name != person2.name`

This is inspired by assume() from hypothesis.

Originally opened as pytest-dev/pytest#3475

Add optional "minimum number of passes" to consider a pass

Currently rerunning a test n times is allowed, but as long as it passes once, the test is considered passed. It would be helpful to have a minimum number of passes required for the test to be considered a pass, for more nuanced pass rates.

Example: I know something fails 33% of the time, so I want to run it four times and make sure it passes at least twice, otherwise I should look into it.
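
A short sketch of the requested check (hypothetical, not implemented by the plugin), using the example above of 4 runs with a required minimum of 2 passes:

def verdict(passed_runs, min_passes):
    # The test would count as passed only if it passed at least
    # min_passes times across all of its runs.
    return "passed" if passed_runs >= min_passes else "failed"

assert verdict(passed_runs=2, min_passes=2) == "passed"
assert verdict(passed_runs=1, min_passes=2) == "failed"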

rerunfailures doesn't actually rerun when a nested pytest.main() fails

For various reasons that are not under my control, I have a setup where:

  • I am running pytest, which launches a subprocess
  • That subprocess embeds Python, which, in some cases can launch another instance of pytest using pytest.main() - which cannot be run in another subprocess

I have been happily using the pytest-rerunfailures plugin to re-run failed tests that do not spawn another pytest. However, when one of the nested pytests fails, my outer pytest runner detects this failure properly, and pytest-rerunfailures seems to sort of work, as it tells me in the console (I specified --reruns 2):

=== 1 error, 2 rerun in 70.98s (0:01:10) ===

However, it did not actually rerun anything; the outer pytest bailed without re-running the test.

I have tried adding command line arguments to the nested pytest to ensure it has its own cache dir, and I even run it with the pytest-rerunfailures plugin disabled (it can't possibly work to have the nested pytest rerun the tests, as it has no way of knowing about the original failure... long story).

Anyway, I am so close to nirvana here, but something is causing pytest-rerunfailures to not work in this particular case. Any ideas?

Lack of rerun attribute on report since 4.2 version

Some pytest-html tests have been failing since October, and I realised that it was mainly because of this specific line:

test_index = hasattr(report, 'rerun') and report.rerun + 1 or 0

The problem is that until version 4.1 the report had a rerun attribute, but for some reason it was removed in version 4.2. I wish it could come back.

Rerun only when the pytest fixture or setup class fails

Hi,

I would like to rerun a test only when a pytest fixture defined in the conftest file fails. I am not concerned about failures in the actual test, only about failures happening in pytest fixtures. Hence, I would like to rerun only when the pytest fixture or setup class fails.

Is this possible with the rerunfailures plugin?

pytest-rerunfailures not using fixtures on reruns

Hi @klrmn,

I'm not totally sure that this is what is actually happening, but we were seeing some strange behaviour on moztrap-tests with the plugin active. We have a fixture which is used to log in to Moztrap at the beginning of a test [1]. With the plugin active I am seeing tests where it appears as if the fixture didn't run, because the login isn't even being attempted [2]. After deactivating the plugin this problem disappears, so I am guessing that, when using the plugin, the fixture is used/run for the first test run, but is then not run again for subsequent runs of the same test.

Again, this is just a guess at this point, but I wonder whether you can think of any reason why that might be happening, or whether you might be able to write a test to verify this behaviour.

[1] https://github.com/mozilla/moztrap-tests/blob/master/conftest.py#L13
[2] https://saucelabs.com/jobs/58823318a85041cca8b9c2fc4e0eaf49

Running tox doesn't find CHANGES.rst

When I try to run tox locally, I get:

ERROR: actionid: py34-pytest29
msg: installpkg
cmdargs: [local('/home/florian/proj/pytest-rerunfailures/.tox/py34-pytest29/bin/pip'), 'install', '-U', '--no-deps', '/home/florian/proj/pytest-rerunfailures/.tox/dist/pytest-rerunfailures-2.0.1.dev0.zip']
env: [...]

Processing ./.tox/dist/pytest-rerunfailures-2.0.1.dev0.zip
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-vcr7ysj5-build/setup.py", line 9, in <module>
        open('CHANGES.rst').read()),
    FileNotFoundError: [Errno 2] No such file or directory: 'CHANGES.rst'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-vcr7ysj5-build/

I have no idea why this happens. I really can't figure it out. I'm in the correct directory (found out by using os.getcwd() as filename 😆), README.rst could be read, CHANGES.rst is there (and I can read it)...

strace says:

[pid 18070] open("README.rst", O_RDONLY|O_CLOEXEC) = 4
[pid 18070] open("CHANGES.rst", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)

Fixtures marked (scope='session') are run on every re-run

I would expect that session fixtures are run once per session - but it seems if a flaky test reruns, it will run session scoped fixtures' teardown code, and then their setup again.

This can result in costly setup/teardown being unnecessarily rerun, and also seems to go against the whole point of labelling the fixture as session-scoped.

Reproduction:

@pytest.fixture(scope="session")
def example_fixture():
    print("\nSetup")
    yield
    print("Teardown")

@pytest.mark.flaky(reruns=1)
def test_example_fixture(example_fixture):
    assert 0

Running with pytest -s -v ... I get:

plugins: timeout-1.2.0, rerunfailures-2.2, repeat-0.4.1, ordering-0.5
collected 1 items 

scripts/autotest/tests/conftest.py::test_example_fixture 
Setup
Teardown
RERUN
scripts/autotest/tests/conftest.py::test_example_fixture 
Setup
Teardown
FAILED

Rerun tests are not reported with "all" setting

When I run pytest -ra path/to/flaky/tests, I expect rerun tests to be listed, but they aren't. I assume this is because 'R' is not something that pytest knows about.

This can be worked around using pytest -raR path/to/flaky/tests, but it would be nice if -ra automatically included 'R' tests as well.

Incompatible with pytest-timeout plugin

pytest-rerunfailures is not compatible with the pytest-timeout plugin.

Probably b/c the same pytest hooks are used. The first time, the timeout is observed, but when the test case re-runs, there's no timeout.

pytest-timeout is more widely used, so it would likely be easier for the fix to be in this plugin, if possible.

Here's a test that fails:

import pytest
pytest_plugins = 'pytester'

def test_pytest_timeout_compatibility_fails(testdir):
    """
    Verifies the `pytest-rerunfailures` plugin is compatible with `pytest-timeout`
    plugin when the test case is not done in the allotted time.
    """
    testdir.makepyfile(
        """
        import time
        import pytest
        @pytest.mark.flaky
        @pytest.mark.timeout(timeout=1)
        def test_times_out():
            time.sleep(2)
            assert True
        """
    )
    result = testdir.runpytest()
    outcome = result.parseoutcomes()
    assert outcome.get('rerun') == 1
    assert outcome.get('failed') == 1

When the test case re-runs, it ignores the timeout and passes.

Note: doesn't matter which decorator comes first (flaky or timeout), the behavior is the same.

rerun only errors, not failures

hey @klrmn

is it possible to rerun only errors, not failures?
is it possible to develop additional switch/hook to allow rerun only of errors?

thanks in advance,

Module setup and teardown functions run multiple times when rerunning a single test function

If a test file has just one test function, the module's setup and teardown functions run multiple times.
Log as follows:

/work/python$ py.test -v -s --reruns 5 test_random.py 
=========================================================================================== test session starts ============================================================================================
platform linux2 -- Python 2.7.14, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 -- /usr/bin/python
cachedir: .cache
rootdir: ~/work/python, inifile:
plugins: rerunfailures-4.0, allure-adaptor-1.7.9
collected 1 item                                                                                                                                                                                           

test_random.py::test_example 
random "module"

random "setup"

random

random "teardown function"

random "teardown module"
RERUN                                                                                                                                                                   [100%]
test_random.py::test_example 
random "module"

random "setup"

random

random "teardown function"

random "teardown module"
RERUN                                                                                                                                                                   [200%]
test_random.py::test_example 
random "module"

random "setup"

random

random "teardown function"

random "teardown module"
RERUN                                                                                                                                                                   [300%]
test_random.py::test_example 
random "module"

random "setup"

random

random "teardown function"

random "teardown module"
PASSED                                                                                                                                                                  [400%]

==================================================================================== 1 passed, 3 rerun in 0.02 seconds =====================================================================================

I think the module's setup and teardown functions should be run only once.

weird setup_class behavior with rerun

test_rerun.py

class Test1(object):

    def setup_class(cls):
        print('setup_class start')
        raise Exception('setup class failure')

    def test1(self):
        pass

    def test2(self):
        pass

    def test3(self):
        pass

pytest 2.9.1 pytest-rerunfailures 4.1
py.test --reruns=1 test_rerun.py -v -s

test_rerun.py::Test1::test1 setup_class start
RERUN
test_rerun.py::Test1::test1 ERROR
test_rerun.py::Test1::test2 RERUN
test_rerun.py::Test1::test2 ERROR
test_rerun.py::Test1::test3 RERUN
test_rerun.py::Test1::test3 setup_class start
ERROR

The output looks confusing. It looks like the setup_class rerun happens only for test3.

README should make it clear that *each* test gets rerun *up to* N times

Great plugin! It's so useful for us here... particularly with Jenkins integration, since we have a few flakey browser-based tests.

I just wanted to point out that the current phrasing in the README was a little ambiguous to me, and as a result I misinterpreted the --reruns count to be a global count. I didn't like that, so I endeavored to fork and modify the functionality so that it worked on a per-test basis... and then I realized that I had misinterpreted the documentation, and the rerun count is actually per-test.

I think the phrasing describing what --reruns does could be clarified a little bit, to make it clear that each test will get rerun up to N times.

Hook pytest_runtest_logfinish is not triggered for marked tests

Reproduction path:

conftest.py

def pytest_runtest_logfinish():
    print("LogFinish")

test_file

import pytest

@pytest.mark.flaky(reruns=2)
def test_logfinish():
    pass

Running ./virt_env/bin/py.test . -s shows that the hook is not triggered. However, it is triggered (the message "LogFinish" is shown) if the flaky mark is removed.

Environment

pytest (4.3.0)
pytest-rerunfailures (6.0)

The 4.2 version bug

In version 4.2, when a file has multiple test cases and the first test case fails, the retry runs setUpClass directly without executing tearDownClass; in version 4.1, this situation does not execute setUpClass.

Integrate with JUnit

Use case: I want to analyze my test results over time to find and address flaky tests. My test results are uploaded to a data analysis tool like Kibana using JUnit.

Problem: Pytest's JUnit support does not capture retries. Only the last test run is recorded and no properties are added that indicate retries.

Example JUnit:

<?xml version="1.0" encoding="utf-8"?><testsuites><testsuite errors="0" failures="1" hostname="epage-2328.local" name="pytest" skipped="0" tests="1" time="0.057" timestamp="2019-11-07T08:24:07.922770"><testcase classname="tests.test_foo" file="tests/test_foo.py" line="2" name="test_foo" time="0.003"><failure message="assert False">@pytest.mark.flaky(reruns=5)
    def test_foo():
&gt;       assert False
E       assert False

tests/test_foo.py:5: AssertionError</failure></testcase></testsuite></testsuites>

Suggested fix: Report execution_count using the record_property fixture so we know how many attempts were made.

Workarounds:

  • --result-log (deprecated): Create a parser for result-log and merge in its results
  • Wait until pytest 5.3 is released and use the new --report-log
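
A rough sketch of the suggested fix above, assuming the execution_count attribute that pytest-rerunfailures sets on the test item (see the plugin's Development notes):

# conftest.py
import pytest

@pytest.fixture(autouse=True)
def record_rerun_count(request, record_property):
    yield
    # Recorded as a property of this test case in the JUnit XML
    # (exact output depends on the configured junit_family).
    record_property("execution_count", getattr(request.node, "execution_count", 1))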

Rerun output not available when using xdist

So the output when running single threaded works just fine. It looks something like this:

common/automation/webdriver/tests/home_tests.py:207: MobileHomeTests.test_repeat_test PASSED
------------------------------------------------------------------------------------------------------------ 3 failed tests rerun ------------------------------------------------------------------------------------------------------------
common/automation/webdriver/tests/home_tests.py::MobileHomeTests::test_repeat_test: FAILED
common/automation/webdriver/tests/home_tests.py::MobileHomeTests::test_repeat_test: FAILED
common/automation/webdriver/tests/home_tests.py::MobileHomeTests::test_repeat_test: FAILED

When running in xdist I don't get this output. I get something like this:

gw0 [1] / gw1 [1] / gw2 [1]
scheduling tests via LoadScheduling
[gw2] PASSED common/automation/webdriver/tests/home_tests.py:207: MobileHomeTests.test_repeat_test

========================================================================================================= 1 passed in 53.52 seconds =

This was for a test that failed once and was rerun again and passed. Your plugin works great and this is the only thing stopping me from using it with our CI system. Thanks!

Number of times a test is rerun?

Is there a way to see how many times an individual test was rerun? I know the summary at the end shows how many tests were rerun, but it doesn't show how many times a single test was rerun.
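
A minimal sketch of one way to surface this (not a built-in feature), using the execution_count attribute described in the plugin's Development notes:

# conftest.py
import pytest

_counts = {}

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    yield
    _counts[item.nodeid] = getattr(item, "execution_count", 1)

def pytest_terminal_summary(terminalreporter):
    for nodeid, count in sorted(_counts.items()):
        if count > 1:
            terminalreporter.write_line("%s ran %d times" % (nodeid, count))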

Doesn't seem to work with pytest.main() execution model

When using pytest.main(), the --reruns option doesn't seem to be recognized.

Sample:

import pytest
from pytest import mark

#@pytest.mark.flaky(reruns=5)   #<==== this works great
def test_method4():
    import random
    assert random.choice([True, False, False, False])

# Execute Tests
def main():
    pytest.main(["test.py", "-s", '--reruns 5'])  #<==== this causes an error

if __name__ == "__main__":
    main()

This works:

$ py.test test-rerun.py -s --reruns 5
=========================================================================================================================================================================================================== test session starts ============================================================================================================================================================================================================
platform darwin -- Python 3.6.0, pytest-3.0.8.dev, py-1.4.32, pluggy-0.4.0
rootdir: /Users/wes/work/pytest-testapp, inifile:
plugins: xdist-1.15.0, rerunfailures-2.1.0
collected 1 items

test-rerun.py RR.

==================================================================================================================================================================================================== 1 passed, 2 rerun in 0.05 seconds =====================================================================================================================================================================================================

This does not:

$ python test-rerun.py
usage: test-rerun.py [options] [file_or_dir] [file_or_dir] [...]
test-rerun.py: error: unrecognized arguments: --reruns 5
  inifile: None
  rootdir: /Users/wes/work/pytest-testapp
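
For reference, pytest.main() expects each command line token as a separate list element, so the option and its value need to be split (or passed as a single "--reruns=5" string); a minimal corrected sketch:

import pytest

# "--reruns 5" as one string is treated as a single unknown argument;
# splitting it (or using "--reruns=5") lets the plugin option be parsed.
pytest.main(["test.py", "-s", "--reruns", "5"])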

PluginValidationError:

I am facing a plugin validation error.

$ pytest --reruns 5
Traceback (most recent call last):
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 510, in load_setuptools_entrypoints
plugin = ep.load()
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/pkg_resources/init.py", line 2404, in load
self.require(*args, **kwargs)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/pkg_resources/init.py", line 2427, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/pkg_resources/init.py", line 872, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (pytest 3.2.1 (/home/vizzbee/anaconda3/lib/python3.6/site-packages), Requirement.parse('pytest>=3.10'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/vizzbee/anaconda3/bin/pytest", line 11, in
sys.exit(main())
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/config.py", line 49, in main
config = _prepareconfig(args, plugins)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/config.py", line 168, in _prepareconfig
pluginmanager=pluginmanager, args=args)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in call
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 613, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 250, in _wrapped_call
wrap_controller.send(call_outcome)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/helpconfig.py", line 68, in pytest_cmdline_parse
config = outcome.get_result()
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 279, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 265, in init
self.result = func()
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
res = hook_impl.function(*args)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/config.py", line 957, in pytest_cmdline_parse
self.parse(args)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/config.py", line 1121, in parse
self._preparse(args, addopts=addopts)
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/config.py", line 1084, in _preparse
self.pluginmanager.load_setuptools_entrypoints('pytest11')
File "/home/vizzbee/anaconda3/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 515, in load_setuptools_entrypoints
"Plugin %r could not be loaded: %s!" % (ep.name, e))
_pytest.vendored_packages.pluggy.PluginValidationError: Plugin 'rerunfailures' could not be loaded: (pytest 3.2.1 (/home/vizzbee/anaconda3/lib/python3.6/site-packages), Requirement.parse('pytest>=3.10'))!

$ pip list|grep rerunfailures
pytest-rerunfailures 7.0

Backward Compatibility

The plugin was largely rewritten by @davehunt in #23. However we now use get_marker, thus require at least pytest >= 2.4.2. What do you think about backward compatibility? Is it OK to drop support for older versions of pytest?

Not compatible with resultslog

Hi,

The plugin is not compatible with the resultslog module from pytest.

By default the resultslog will not be created. To turn on this feature you need to set the --resultlog or --result-log pytest command line parameter. If we have a failing test and run pytest like so:

$ pytest --result-log=pytest.log --reruns=3

we will get the following error:

...
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "c:\users\eostr\envs\pytest\lib\site-packages\_pytest\resultlog.py", line 86, in pytest_runtest_logreport
INTERNALERROR> self.log_outcome(report, code, longrepr)
INTERNALERROR> UnboundLocalError: local variable 'longrepr' referenced before assignment

This problem is caused because, when a test fails for the first time (and not the last time), line 86 is executed. The value of the report will be set to rerun. In pytest_runtest_logreport there are a bunch of checks that set the value of longrepr based on the report. Since rerun is not a valid value, longrepr does not get set.

Cheers,
Edgar

rerun failures at session end instead of rerunning instantly

hi,

Would it be possible to, for example, introduce a new option on the marker that, if set, would collect failed tests in a list and rerun that list at the end of the test session? I tried to achieve something like this myself, simply by

request.session.items.append(request.node)

but it doesn't work with xdist, as it collects the test queue at session start and distributes it to the slave workers.

Add Flake8 verification

Hi, when I was writing my last contribution I realised that rerunfailures doesn't have Flake8 verifications. I wish I could add that (unless you don't have it for some other reason)

Tests stop with --exitfirst, even when rerun successfully

There currently is a problem with --exitfirst. In my understanding, the tests should stop only when a test has failed after rerunning it n times. This does happen, but the tests also stop when a test failed and then succeeded during a rerun.

So in other words, currently the test suite will be run normally and stops at first failure, but this failure will be run n times. However, I would expect the test suite to continue, when a failing test could be rerun successfully.

Part of the solution would be to reset the --exitfirst counter (internally it's implemented with --maxfail num with num = 1 which stops after num failures).

Command line switch to disable?

One feature that I often find myself wishing for is a command line switch to disable reruns entirely, even for tests marked with @pytest.mark.flaky(reruns=N). This would be useful when rewriting/updating tests without commenting/uncommenting the @pytest.mark.flaky decorator.
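
A rough sketch of a possible workaround (this is not a feature of pytest-rerunfailures; the --no-flaky-reruns flag below is hypothetical), which strips flaky markers applied directly to test functions during collection:

# conftest.py
def pytest_addoption(parser):
    parser.addoption(
        "--no-flaky-reruns",
        action="store_true",
        help="ignore @pytest.mark.flaky for this run (hypothetical option)",
    )

def pytest_collection_modifyitems(config, items):
    if not config.getoption("--no-flaky-reruns"):
        return
    for item in items:
        # Only handles markers set on the test function itself, not
        # markers inherited from classes or modules.
        item.own_markers = [m for m in item.own_markers if m.name != "flaky"]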

Add wait time

For testing with infratest it would be nice to add a delay for reruns (in seconds)

Expose execution count on the item level.

It would be great to know the number of executions of a current item (test case) stored in the internal variable of the object.

We have built an integration with a Test Case Management system in our company, and for flaky tests that passed after a certain retry, we would like to mark them with a different status rather than always "passed".

Ignore failed runs

We are using jenkins.
When running with pytest-rerunfailures, the build is marked as failed when there's one failure.
====================== 1 failed, 3 passed in 2.90 seconds ======================
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Is it possible not to report the failed runs if there is at least one successful run? Only if all runs fail should the test end as a failure.

tox -e py27-ptlatest-x hangs and eats all memory

I previously raised this at https://github.com/davehunt/pytest-html/issues/31 but I can reproduce this with rerunfailures' testsuite as well. When running tox -e py27-ptlatest-x, after about a minute, the tests eat up 10 GB of RAM and fail spectacularly with >15000 lines of output:

GLOB sdist-make: /home/florian/proj/pytest-rerunfailures/setup.py
py27-ptlatest-x recreate: /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x
py27-ptlatest-x installdeps: pytest, pytest-xdist
py27-ptlatest-x inst: /home/florian/proj/pytest-rerunfailures/.tox/dist/pytest-rerunfailures-0.05.zip
py27-ptlatest-x runtests: PYTHONHASHSEED='4015671028'
py27-ptlatest-x runtests: commands[0] | /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/bin/py.test .
============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /home/florian/proj/pytest-rerunfailures, inifile: 
plugins: rerunfailures-0.05, xdist-1.13.1
collected 20 items

tests/test_config.py xs
tests/test_functionality.py ...............F.F

=================================== FAILURES ===================================
_____________ TestFunctionality.test_verbose_statuses_with_reruns ______________

self = <test_functionality.TestFunctionality object at 0x7f8c0e4e4e10>
testdir = <Testdir local('/tmp/pytest-of-florian/pytest-16/testdir/test_verbose_statuses_with_reruns0')>

    def test_verbose_statuses_with_reruns(self, testdir):
        test_file = self.variety_of_tests(testdir)

        result = testdir.runpytest('--reruns=2', '--verbose')

        assert self._substring_in_output(' 1 passed', result.outlines)
>       assert self._substring_in_output('test_verbose_statuses_with_reruns.py:3: test_fake_pass PASSED', result.outlines)
E       assert <bound method TestFunctionality._substring_in_output of <test_functionality.TestFunctionality object at 0x7f8c0e4e4e10>>('test_verbose_statuses_with_reruns.py:3: test_fake_pass PASSED', ['============================= test session starts ==============================', 'platform linux2 -- Python 2.7.9,...tatuses_with_reruns0, inifile: ', 'plugins: rerunfailures-0.05, xdist-1.13.1', 'collecting ... collected 5 items', ...])
E        +  where <bound method TestFunctionality._substring_in_output of <test_functionality.TestFunctionality object at 0x7f8c0e4e4e10>> = <test_functionality.TestFunctionality object at 0x7f8c0e4e4e10>._substring_in_output
E        +  and   ['============================= test session starts ==============================', 'platform linux2 -- Python 2.7.9,...tatuses_with_reruns0, inifile: ', 'plugins: rerunfailures-0.05, xdist-1.13.1', 'collecting ... collected 5 items', ...] = <_pytest.pytester.RunResult instance at 0x7f8c0e99c488>.outlines

/home/florian/proj/pytest-rerunfailures/tests/test_functionality.py:296: AssertionError
----------------------------- Captured stdout call -----------------------------
============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-2.8.7, py-1.4.31, pluggy-0.3.1 -- /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/bin/python2.7
cachedir: .cache
rootdir: /tmp/pytest-of-florian/pytest-16/testdir/test_verbose_statuses_with_reruns0, inifile: 
plugins: rerunfailures-0.05, xdist-1.13.1
collecting ... collected 5 items

test_verbose_statuses_with_reruns.py::test_fake_pass PASSED
test_verbose_statuses_with_reruns.py::test_fake_fail FAILED
test_verbose_statuses_with_reruns.py::test_xfail xfail
test_verbose_statuses_with_reruns.py::test_xpass XPASS
test_verbose_statuses_with_reruns.py::test_flaky_test RERUN

=================================== FAILURES ===================================
________________________________ test_fake_fail ________________________________

    @pytest.mark.nondestructive
    def test_fake_fail():
>       raise Exception, "OMG! fake test failure!"
E       Exception: OMG! fake test failure!

test_verbose_statuses_with_reruns.py:9: Exception
====== 1 failed, 1 passed, 1 xfailed, 1 xpassed, 1 rerun in 0.02 seconds =======
------------------------------
' 1 passed' matched: ====== 1 failed, 1 passed, 1 xfailed, 1 xpassed, 1 rerun in 0.02 seconds =======
------------------------------
'test_verbose_statuses_with_reruns.py:3: test_fake_pass PASSED' not found in:
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.9, pytest-2.8.7, py-1.4.31, pluggy-0.3.1 -- /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/bin/python2.7
    cachedir: .cache
    rootdir: /tmp/pytest-of-florian/pytest-16/testdir/test_verbose_statuses_with_reruns0, inifile: 
    plugins: rerunfailures-0.05, xdist-1.13.1
    collecting ... collected 5 items

    test_verbose_statuses_with_reruns.py::test_fake_pass PASSED
    test_verbose_statuses_with_reruns.py::test_fake_fail FAILED
    test_verbose_statuses_with_reruns.py::test_xfail xfail
    test_verbose_statuses_with_reruns.py::test_xpass XPASS
    test_verbose_statuses_with_reruns.py::test_flaky_test RERUN

    =================================== FAILURES ===================================
    ________________________________ test_fake_fail ________________________________

        @pytest.mark.nondestructive
        def test_fake_fail():
    >       raise Exception, "OMG! fake test failure!"
    E       Exception: OMG! fake test failure!

    test_verbose_statuses_with_reruns.py:9: Exception
    ====== 1 failed, 1 passed, 1 xfailed, 1 xpassed, 1 rerun in 0.02 seconds =======

___________ TestFunctionality.test_report_on_with_reruns_with_xdist ____________

self = <test_functionality.TestFunctionality object at 0x7f8c0e677650>
testdir = <Testdir local('/tmp/pytest-of-florian/pytest-16/testdir/test_report_on_with_reruns_with_xdist0')>

    def test_report_on_with_reruns_with_xdist(self, testdir):
        '''This test is identical to test_report_on_with_reruns except it
            also uses xdist's -n flag.
            '''
        # precondition: xdist installed
        self._pytest_xdist_installed(testdir)

        test_file = self.variety_of_tests(testdir)

        result = testdir.runpytest('--reruns=2', '-r fsxXR', '-n 1')

>       assert self._substring_in_output('.FxXR', result.outlines)
E       assert <bound method TestFunctionality._substring_in_output of <test_functionality.TestFunctionality object at 0x7f8c0e677650>>('.FxXR', ['============================= test session starts ==============================', 'platform linux2 -- Python 2.7.9,...e: ', 'plugins: rerunfailures-0.05, xdist-1.13.1', 'gw0 I', '[gw0] node down: Traceback (most recent call last):', ...])
E        +  where <bound method TestFunctionality._substring_in_output of <test_functionality.TestFunctionality object at 0x7f8c0e677650>> = <test_functionality.TestFunctionality object at 0x7f8c0e677650>._substring_in_output
E        +  and   ['============================= test session starts ==============================', 'platform linux2 -- Python 2.7.9,...e: ', 'plugins: rerunfailures-0.05, xdist-1.13.1', 'gw0 I', '[gw0] node down: Traceback (most recent call last):', ...] = <_pytest.pytester.RunResult instance at 0x7f8c0e5bca28>.outlines

/home/florian/proj/pytest-rerunfailures/tests/test_functionality.py:348: AssertionError
----------------------------- Captured stdout call -----------------------------
============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /tmp/pytest-of-florian/pytest-16/testdir/test_report_on_with_reruns_with_xdist0, inifile: 
plugins: rerunfailures-0.05, xdist-1.13.1
gw0 I
[gw0] node down: Traceback (most recent call last):
  File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 841, in _local_receive
    data = loads_internal(data, channel, strconfig)
  File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1350, in loads_internal
    return Unserializer(io, channelfactory, strconfig).load()
  File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1152, in load
    opcode = self.stream.read(1)
  File "/usr/lib/python2.7/StringIO.py", line 127, in read
    _complain_ifclosed(self.closed)
TypeError: 'NoneType' object is not callable

Replacing crashed slave gw0
[gw1] node down: Traceback (most recent call last):

[...]

Replacing crashed slave gw290
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 121, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 301, in __call__
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 279, in get_result
INTERNALERROR>     _reraise(*ex)  # noqa
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
INTERNALERROR>     self.result = func()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 299, in <lambda>
INTERNALERROR>     outcome = _CallOutcome(lambda: self.oldcall(hook, hook_impls, kwargs))
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 521, in pytest_runtestloop
INTERNALERROR>     self.loop_once()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 539, in loop_once
INTERNALERROR>     call(**kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 603, in slave_errordown
INTERNALERROR>     self._clone_node(node)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 665, in _clone_node
INTERNALERROR>     node = self.nodemanager.setup_node(spec, self.queue.put)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/slavemanage.py", line 49, in setup_node
INTERNALERROR>     gw = self.group.makegateway(spec)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/multi.py", line 127, in makegateway
INTERNALERROR>     io = gateway_io.create_io(spec, execmodel=self.execmodel)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_io.py", line 113, in create_io
INTERNALERROR>     return Popen2IOMaster(args, execmodel)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_io.py", line 17, in __init__
INTERNALERROR>     self.popen = p = execmodel.PopenPiped(args)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 178, in PopenPiped
INTERNALERROR>     return self.subprocess.Popen(args, stdout=PIPE, stdin=PIPE)
INTERNALERROR>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
INTERNALERROR>     errread, errwrite)
INTERNALERROR>   File "/usr/lib/python2.7/subprocess.py", line 1231, in _execute_child
INTERNALERROR>     self.pid = os.fork()
INTERNALERROR> OSError: [Errno 12] Cannot allocate memory

======================== no tests ran in 65.22 seconds =========================
------------------------------
'.FxXR' not found in:
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.9, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
    rootdir: /tmp/pytest-of-florian/pytest-16/testdir/test_report_on_with_reruns_with_xdist0, inifile: 
    plugins: rerunfailures-0.05, xdist-1.13.1
    gw0 I
    [gw0] node down: Traceback (most recent call last):
      File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 841, in _local_receive
        data = loads_internal(data, channel, strconfig)
      File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1350, in loads_internal
        return Unserializer(io, channelfactory, strconfig).load()
      File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1152, in load
        opcode = self.stream.read(1)
      File "/usr/lib/python2.7/StringIO.py", line 127, in read
        _complain_ifclosed(self.closed)
    TypeError: 'NoneType' object is not callable

    Replacing crashed slave gw0

[...]

    Replacing crashed slave gw290
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
    INTERNALERROR>     session.exitstatus = doit(config, session) or 0
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 121, in _main
    INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 301, in __call__
    INTERNALERROR>     return outcome.get_result()
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 279, in get_result
    INTERNALERROR>     _reraise(*ex)  # noqa
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
    INTERNALERROR>     self.result = func()
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 299, in <lambda>
    INTERNALERROR>     outcome = _CallOutcome(lambda: self.oldcall(hook, hook_impls, kwargs))
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 521, in pytest_runtestloop
    INTERNALERROR>     self.loop_once()
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 539, in loop_once
    INTERNALERROR>     call(**kwargs)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 603, in slave_errordown
    INTERNALERROR>     self._clone_node(node)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/dsession.py", line 665, in _clone_node
    INTERNALERROR>     node = self.nodemanager.setup_node(spec, self.queue.put)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/slavemanage.py", line 49, in setup_node
    INTERNALERROR>     gw = self.group.makegateway(spec)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/multi.py", line 127, in makegateway
    INTERNALERROR>     io = gateway_io.create_io(spec, execmodel=self.execmodel)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_io.py", line 113, in create_io
    INTERNALERROR>     return Popen2IOMaster(args, execmodel)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_io.py", line 17, in __init__
    INTERNALERROR>     self.popen = p = execmodel.PopenPiped(args)
    INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 178, in PopenPiped
    INTERNALERROR>     return self.subprocess.Popen(args, stdout=PIPE, stdin=PIPE)
    INTERNALERROR>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    INTERNALERROR>     errread, errwrite)
    INTERNALERROR>   File "/usr/lib/python2.7/subprocess.py", line 1231, in _execute_child
    INTERNALERROR>     self.pid = os.fork()
    INTERNALERROR> OSError: [Errno 12] Cannot allocate memory

    ======================== no tests ran in 65.22 seconds =========================

----------------------------- Captured stderr call -----------------------------
This is pytest version 2.8.7, imported from /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/pytest.pyc
setuptools registered plugins:
  pytest-rerunfailures-0.05 at /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/rerunfailures/plugin.pyc
  pytest-xdist-1.13.1 at /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/plugin.pyc
  pytest-xdist-1.13.1 at /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/looponfail.pyc
  pytest-xdist-1.13.1 at /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/boxed.pyc
nomatch: '*pytest-xdist*'
    and: u'This is pytest version 2.8.7, imported from /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/pytest.pyc'
    and: u'setuptools registered plugins:'
    and: u'  pytest-rerunfailures-0.05 at /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/rerunfailures/plugin.pyc'
fnmatch: '*pytest-xdist*'
   with: u'  pytest-xdist-1.13.1 at /home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/xdist/plugin.pyc'
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 120, in _main
INTERNALERROR>     config.hook.pytest_collection(session=session)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 129, in pytest_collection
INTERNALERROR>     return session.perform_collect()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/main.py", line 565, in perform_collect
INTERNALERROR>     hook.pytest_collection_finish(session=self)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "<remote exec>", line 77, in pytest_collection_finish
INTERNALERROR>   File "<remote exec>", line 23, in sendevent
INTERNALERROR>   File "/home/florian/proj/pytest-rerunfailures/.tox/py27-ptlatest-x/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 716, in send
INTERNALERROR>     raise IOError("cannot send to %r" % (self,))
INTERNALERROR> IOError: cannot send to <Channel id=1 closed>

[...]

Only rerun certain tests

We mostly use this plugin to make flaky tests green. (The flakiness is usually outside our control, or only shows up on the Jenkins machine.)

However, the plugin re-runs ALL failing tests. Since some of our flaky tests need 5 or more runs, this slows the test suite down by an order of magnitude whenever we introduce a bug that breaks many tests.

Therefore we would be very glad if it were possible to re-run only certain tests, e.g. only tests carrying a marker such as @flaky.

Anyway, thanks for your work! Without it we would have to deactivate all those flaky tests, which is not a good solution.
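
A possible workaround, shown here only as a sketch and assuming the flaky tests already carry a `flaky` marker registered in the project's pytest configuration, is to split the session using pytest's standard -m marker selection so that only the marked tests run with reruns enabled:

$ pytest -m flaky --reruns 5
$ pytest -m "not flaky"

Note that -m is plain pytest marker filtering, not a feature of this plugin; whether a split session is acceptable depends on the CI setup (for example the Jenkins job mentioned above).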

Is this project unmaintained?

Is it still the case that this project is unmaintained?

I haven't seen any other Python packages with functionality similar to this one for re-running flaky tests. Is there something else anyone would recommend, or is this project the best place to start?
