
unittest-parallel's People

Contributors

craigahobbs, stewartmiles


unittest-parallel's Issues

Running a single test file given a file path

I'm trying out unittest-parallel in my project.

I often do the following with unittest:

python -m unittest -cb path/to/test/file_test.py

The only way to do that with unittest-parallel seems to be

python -m unittest_parallel -b -k path.to.test.file_test -p '*_test.py'

which works fine but is less convenient to type by hand, since bash/zsh cannot tab-complete the argument: it is almost a file path (the path separator is . and the extension is dropped), but not quite.

I am also aware that I can still use plain unittest for this use case, but I am wondering if I am just missing something obvious.
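As a workaround, the translation from a file path to the dotted test name is mechanical, so a tiny helper can do it (this function is hypothetical, not part of unittest-parallel):

```python
from pathlib import Path

def path_to_test_name(path):
    """Convert a test file path like 'path/to/test/file_test.py'
    into the dotted name that unittest's -k option expects."""
    # Drop the .py extension, then join the path components with dots.
    return '.'.join(Path(path).with_suffix('').parts)

print(path_to_test_name('path/to/test/file_test.py'))
```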

How do I use it on my project?

How do I use unittest-parallel in my project?

When I run

unittest-parallel tests.py

I get the following error

unittest-parallel: error: unrecognized arguments: tests.py

Coverage: warning "module-not-measured"

Hello,

Thank you for your good work; tests run about twice as fast with it as with plain unittest.

But I have a problem. I run with this command:
unittest-parallel -t . -s . -p 'test_*.py' --coverage --coverage-source XXX

where XXX is the name of my package

Just before the coverage report, I get this warning:
Coverage.py warning: Module XXX was previously imported, but not measured (module-not-measured)

Is there a way to fix this warning?

Function level parallelization

Is it possible to detect all test functions, split them into chunks, and run each chunk in a separate process? On Linux, fork could be used.
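The discovery side of this is straightforward with the stdlib. A rough sketch (not unittest-parallel's actual implementation) that flattens a suite into individual test methods and splits them round-robin into chunks, each of which could then be handed to a forked worker process:

```python
import unittest

def iter_tests(suite):
    """Recursively yield individual TestCase instances from a suite."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

def make_chunks(tests, n):
    """Split tests into n round-robin chunks of near-equal size."""
    return [tests[i::n] for i in range(n)]

class Demo(unittest.TestCase):
    def test_a(self): pass
    def test_b(self): pass
    def test_c(self): pass

tests = list(iter_tests(unittest.TestLoader().loadTestsFromTestCase(Demo)))
chunks = make_chunks(tests, 2)
print([len(c) for c in chunks])
```

One caveat: splitting below class granularity interacts badly with setUpClass/tearDownClass fixtures, as the next issue demonstrates.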

unittest-parallel 1.2.x does not work with setUpClass and tearDownClass

  1. Copy-paste the following text into the "tests.py" file in an empty directory:
import random
import time
import unittest


class Tests1(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        print('=== setUpClass')
        assert not hasattr(cls, 'bar')
        cls.bar = None

    @classmethod
    def tearDownClass(cls):
        print('=== tearDownClass')
        assert cls.bar is None
        del cls.bar

    def setUp(self):
        print('=== setUp')
        assert not hasattr(self, 'foo')
        self.foo = None

    def tearDown(self):
        print('=== tearDown')
        assert self.foo is None
        del self.foo

    def test_1(self):
        time.sleep(random.random())

    def test_2(self):
        time.sleep(random.random())


class Tests2(unittest.TestCase):

    def test_1(self):
        time.sleep(random.random())

    def test_2(self):
        time.sleep(random.random())
  2. In the directory from step 1, execute the following:
unittest-parallel -v

Actual output (notice setUpClass and tearDownClass are not called):

=== setUp
=== setUp
=== tearDown
test_2 (tests.Tests1) ... ok
test_2 (tests.Tests2) ... ok
test_1 (tests.Tests2) ... ok
=== tearDown
test_1 (tests.Tests1) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.895s

OK

Expected output:

=== setUpClass
=== setUp
=== tearDown
test_1 (tests.Tests1) ... ok
=== setUp
=== tearDown
test_2 (tests.Tests1) ... ok
=== tearDownClass
test_1 (tests.Tests2) ... ok
test_2 (tests.Tests2) ... ok

----------------------------------------------------------------------
Ran 4 tests in 1.215s

OK

How do I get this to work?

I tried unittest-parallel -j 3

with

import datetime
import time
import unittest


class TestA(unittest.TestCase):
    def setUp(self):
        pass

    def test_A(self):
        for i in range(10):
            print('A' + str(i) + ': ' + str(datetime.datetime.now()))
            time.sleep(1)

    def test_B(self):
        for i in range(10):
            print('B' + str(i) + ': ' + str(datetime.datetime.now()))
            time.sleep(1)

    def test_C(self):
        for i in range(10):
            print('C' + str(i) + ': ' + str(datetime.datetime.now()))
            time.sleep(1)

    def tearDown(self):
        pass


if __name__ == '__main__':
    unittest.main()

but the tests ran sequentially.

Add support for -k

Thanks for making this package!

Any chance you could add support for the -k switch, which lets you filter the tests being run?

(As implemented for unittest and pytest)
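A sketch of what -k-style filtering over a discovered suite could look like (this approximates unittest's substring/glob matching; the helper is hypothetical, not unittest-parallel's code):

```python
import fnmatch
import unittest

def filter_by_pattern(suite, pattern):
    """Keep only tests whose id() matches *pattern* as a substring,
    recursing into nested suites."""
    matched = unittest.TestSuite()
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            sub = filter_by_pattern(item, pattern)
            if sub.countTestCases():
                matched.addTest(sub)
        elif fnmatch.fnmatchcase(item.id(), '*' + pattern + '*'):
            matched.addTest(item)
    return matched

class DemoK(unittest.TestCase):
    def test_alpha(self): pass
    def test_beta(self): pass

suite = unittest.TestLoader().loadTestsFromTestCase(DemoK)
print(filter_by_pattern(suite, 'alpha').countTestCases())
```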

How do I run unittest-parallel from Python code?

Currently I use unittest from a Python module, something like:

    loader = unittest.TestLoader()
    suite = loader.discover(parser.TestDirectory)

    runner = unittest.TextTestRunner()
    result = runner.run(suite)

To use parallelization I'd probably need to cherry-pick snippets of code from your main function (separated by comments such as # Run the tests in parallel).
Would you mind splitting your main function so that I can import those parts easily?
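Until then, the general pattern can be approximated with only the stdlib. This sketch uses a thread pool for simplicity (unittest-parallel itself uses a process pool) and runs one suite per test class, returning plain counts that are easy to aggregate:

```python
import unittest
from multiprocessing.pool import ThreadPool

class DemoA(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

class DemoB(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(1 + 1, 2)

def run_suite(suite):
    """Run one suite and return simple, picklable result counts."""
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.testsRun, len(result.failures), len(result.errors)

# One suite per test class, mirroring class-level parallelism.
loader = unittest.TestLoader()
suites = [loader.loadTestsFromTestCase(cls) for cls in (DemoA, DemoB)]

with ThreadPool(2) as pool:
    counts = pool.map(run_suite, suites)

total = sum(run for run, _, _ in counts)
print(total)
```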

Better output?

With -v option, the program is spitting out a lot of chunks of

...TESTS...
----------------------------------------------------------------------
Ran 4 tests in 3.124s

OK (skipped=4)
...TESTS...
----------------------------------------------------------------------
Ran 2 tests in 0.42s

OK (skipped=2)

This is not really helpful compared to the original unittest output which looks like this:

...TESTS...
----------------------------------------------------------------------
Ran 42 tests in 512.462s

OK (skipped=13)

Maybe custom aggregation of the test counts and total time across runners would be helpful?

Parameterize the module I want to use to run tests

I've been using your package and it works great, thank you for building it.

When I run python -m unittest, it executes the unittest module as a script - something like python unittest.py, which gives me some flexibility. For example, when I run Python unit tests inside of TeamCity, I can run python -m teamcity.unittestpy instead to use TeamCity's Python package (essentially a wrapper around unittest module) - that discovers and runs tests, and reports the results in a format that TeamCity can understand.

Do you think that could be added as a parameter, something like unittest-parallel -m teamcity.unittestpy?
By quickly looking at your code and teamcity-messages's documentation, I believe that, in the scenario I described above, this line would run TeamcityTestRunner() instead.

I would be more than happy to work on it and put up a PR, let me know.

Note: This is probably an enhancement and not an issue.
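A sketch of how such a parameter could resolve a runner class at runtime (the make_runner helper and the 'module:ClassName' spec format are hypothetical, not an existing unittest-parallel option):

```python
import importlib
import unittest

def make_runner(runner_spec=None):
    """Resolve a 'module:ClassName' spec to a test runner class,
    falling back to unittest.TextTestRunner when unspecified."""
    if runner_spec is None:
        return unittest.TextTestRunner
    module_name, class_name = runner_spec.split(':')
    return getattr(importlib.import_module(module_name), class_name)

print(make_runner('unittest:TextTestRunner') is unittest.TextTestRunner)
```

The same mechanism would cover the TeamCity case by passing the runner class exposed by teamcity-messages, assuming it follows the standard runner interface.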

Occasional exception in multiprocessing when running in Python 3.7

I noticed our test suite sometimes fails with a multiprocessing exception, always the same one (TypeError: an integer is required (got type NoneType)). The chance seems to be around 50-50, and if I run the tests again it usually works. It only happens under Python 3.7.

Tests are run with poetry run unittest-parallel -j 16

Here is an example run: https://github.com/datafold/data-diff/runs/7197722519?check_suite_focus=true

And here is the stack-trace from that run:

Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/home/runner/.cache/pypoetry/virtualenvs/data-diff-DY5pfXRE-py3.7/lib/python3.7/site-packages/unittest_parallel/main.py", line 269, in run_tests
    if self.failfast.is_set():
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/managers.py", line 1088, in is_set
    return self._callmethod('is_set')
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/managers.py", line 818, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
TypeError: an integer is required (got type NoneType)
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/runner/.cache/pypoetry/virtualenvs/data-diff-DY5pfXRE-py3.7/bin/unittest-parallel", line 8, in <module>
    sys.exit(main())
  File "/home/runner/.cache/pypoetry/virtualenvs/data-diff-DY5pfXRE-py3.7/lib/python3.7/site-packages/unittest_parallel/main.py", line 116, in main
    results = pool.map(test_manager.run_tests, test_suites)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
TypeError: an integer is required (got type NoneType)

Generation of JUNITXML

Hi everyone, so sorry if this is not supposed to be raised as an issue; I am really new at GitHub, to be honest :)

I found your project MARVELOUS! It works wonders, and has honestly given me far superior performance to pytest-parallel or pytest-xdist. The only thing I have not been able to find is how to generate JUnit-XML-like reports from it.

I found the coverage report among the flags, which gives me a nice percentage of how each file of my project has been exercised, but I haven't found a report containing the failures and errors from the tests themselves (which I will then send to my CodeBuild reports).

Could you please let me know if there is a way? If this is not how I should ask, I definitely apologize in advance.

Thanks!

Hi @craigahobbs This is not an issue, but I don't know where else to post this. I just tried your test runner on my test suite, and it shaved 70% off the runtime of my painfully long integration tests. Plus, in contrast to other parallel runners I tried, it actually worked. So here's a big thank you for your work :-)
