craigahobbs / unittest-parallel
Parallel unit test runner for Python with coverage support
License: MIT License
Hi!
Do you support Windows?
I'm trying out unittest-parallel in my project. I often do the following with unittest:
python -m unittest -cb path/to/test/file_test.py
The only way to do that with unittest-parallel seems to be
python -m unittest_parallel -b -k path.to.test.file_test -p '*_test.py'
which works fine, but is not as convenient to type manually: bash/zsh cannot tab-complete the argument because it is almost a file path (the path separator is . and the extension is dropped), but not quite.
I am also aware that I can still use plain unittest for this use case, but I am wondering if I am just missing something obvious.
How do I use unittest-parallel in my project? When I run
unittest-parallel tests.py
I get the following error:
unittest-parallel: error: unrecognized arguments: tests.py
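For context, unittest-parallel performs test discovery rather than accepting a test file as a positional argument, so an invocation along these lines should work (this uses the -t/-s/-p discovery flags that appear elsewhere in this thread; the tests directory name is an assumption):

```shell
# Discover and run tests in parallel; positional file arguments are not accepted.
# -t: project top-level directory, -s: directory to start discovery in,
# -p: filename pattern ('tests' is an assumed directory name here)
unittest-parallel -t . -s tests -p 'test_*.py'
```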
Hello,
Thank you for your good work; tests take half as long to run with it compared to unittest.
But I have a problem. I run with this command:
unittest-parallel -t . -s . -p 'test_*.py' --coverage --coverage-source XXX
where XXX is the name of my package.
Just before the coverage report, I get this warning:
Coverage.py warning: Module XXX was previously imported, but not measured (module-not-measured)
Is there a way to fix this warning?
Is it possible to detect all test functions, split them into chunks, and run each chunk in a separate process? On Linux, fork can be used.
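The idea in the question can be sketched with the standard library alone: flatten a suite into individual test cases, split them into chunks, and fork one child per chunk (Unix-only; the Tests class and the chunk count below are illustrative assumptions, not unittest-parallel's implementation):

```python
import os
import unittest

class Tests(unittest.TestCase):
    def test_a(self): self.assertTrue(True)
    def test_b(self): self.assertTrue(True)
    def test_c(self): self.assertTrue(True)
    def test_d(self): self.assertTrue(True)

def iter_tests(suite):
    # Flatten a (possibly nested) TestSuite into individual test cases
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

# Detect all tests, then split them into round-robin chunks
tests = list(iter_tests(unittest.TestLoader().loadTestsFromTestCase(Tests)))
num_procs = 2
chunks = [tests[i::num_procs] for i in range(num_procs)]

# Run each chunk in a forked child; the exit code carries the failure count
pids = []
for part in chunks:
    pid = os.fork()
    if pid == 0:
        result = unittest.TextTestRunner(verbosity=0).run(unittest.TestSuite(part))
        os._exit(len(result.failures) + len(result.errors))
    pids.append(pid)

statuses = [os.waitpid(pid, 0)[1] for pid in pids]
failures = sum(os.waitstatus_to_exitcode(s) for s in statuses)
print(failures)  # 0 when all chunks pass
```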
import random
import time
import unittest


class Tests1(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        print('=== setUpClass')
        assert not hasattr(cls, 'bar')
        cls.bar = None

    @classmethod
    def tearDownClass(cls):
        print('=== tearDownClass')
        assert cls.bar is None
        del cls.bar

    def setUp(self):
        print('=== setUp')
        assert not hasattr(self, 'foo')
        self.foo = None

    def tearDown(self):
        print('=== tearDown')
        assert self.foo is None
        del self.foo

    def test_1(self):
        time.sleep(random.random())

    def test_2(self):
        time.sleep(random.random())


class Tests2(unittest.TestCase):

    def test_1(self):
        time.sleep(random.random())

    def test_2(self):
        time.sleep(random.random())
unittest-parallel -v
Actual output - notice setUpClass and tearDownClass are not called:
=== setUp
=== setUp
=== tearDown
test_2 (tests.Tests1) ... ok
test_2 (tests.Tests2) ... ok
test_1 (tests.Tests2) ... ok
=== tearDown
test_1 (tests.Tests1) ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.895s
OK
Expected output:
=== setUpClass
=== setUp
=== tearDown
test_1 (tests.Tests1) ... ok
=== setUp
=== tearDown
test_2 (tests.Tests1) ... ok
=== tearDownClass
test_1 (tests.Tests2) ... ok
test_2 (tests.Tests2) ... ok
----------------------------------------------------------------------
Ran 4 tests in 1.215s
OK
unittest-parallel currently uses the default process start method of the multiprocessing module (i.e., fork on Linux), which can lead to subprocesses hanging. For a detailed explanation see https://britishgeologicalsurvey.github.io/science/python-forking-vs-spawn/
Would it be possible to add an option that stops the test run on the first error or failure?
Like: https://docs.python.org/3/library/unittest.html#cmdoption-unittest-f
It looks like failfast=True could be passed to unittest.TextTestRunner:
https://docs.python.org/3/library/unittest.html#unittest.TextTestRunner
Though I imagine there is other work that would need to be done to shut down the multiprocessing.Pool so the other tests do not continue to run?
I tried unittest-parallel -j 3 with

import datetime
import time
import unittest


class TestA(unittest.TestCase):

    def setUp(self):
        pass

    def test_A(self):
        for i in range(10):
            print('A' + str(i) + ': ' + str(datetime.datetime.now()))
            time.sleep(1)

    def test_B(self):
        for i in range(10):
            print('B' + str(i) + ': ' + str(datetime.datetime.now()))
            time.sleep(1)

    def test_C(self):
        for i in range(10):
            print('C' + str(i) + ': ' + str(datetime.datetime.now()))
            time.sleep(1)

    def tearDown(self):
        pass


if __name__ == '__main__':
    unittest.main()
but it went sequentially
Thanks for making this package!
Any chance you can add support for the -k switch, which lets you filter the tests being run?
(As implemented for unittest and pytest.)
Currently I use unittest from a Python module, something like:
loader = unittest.TestLoader()
suite = loader.discover(parser.TestDirectory)
runner = unittest.TextTestRunner()
result = runner.run(suite)
To use parallelization I'd probably need to cherry-pick snippets of code from your main function (separated by comments like, e.g., # Run the tests in parallel).
Would you mind splitting your main function so that I can import those parts easily?
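For reference, the parallel part can be approximated in user code with a plain multiprocessing.Pool over per-class suites; this is only a sketch of the pattern, not unittest-parallel's actual implementation (the two test classes are invented, and the 'fork' start method is Unix-only):

```python
import multiprocessing
import unittest

class MathTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

class StrTests(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('a'.upper(), 'A')

def run_suite(suite):
    # Run one chunk of tests and return a picklable summary
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return (result.testsRun, len(result.errors), len(result.failures))

loader = unittest.TestLoader()
# One suite per test class, mirroring class-level parallelism
suites = [loader.loadTestsFromTestCase(cls) for cls in (MathTests, StrTests)]
# 'fork' avoids the __main__ re-import requirement of 'spawn' (Unix-only)
with multiprocessing.get_context('fork').Pool(2) as pool:
    results = pool.map(run_suite, suites)
total_run = sum(r[0] for r in results)
print(total_run)  # 2
```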
With the -v option, the program spits out many chunks like:
...TESTS...
----------------------------------------------------------------------
Ran 4 tests in 3.124s
OK (skipped=4)
...TESTS...
----------------------------------------------------------------------
Ran 2 tests in 0.42s
OK (skipped=2)
This is not really helpful compared to the original unittest output which looks like this:
...TESTS...
----------------------------------------------------------------------
Ran 42 tests in 512.462s
OK (skipped=13)
Maybe custom aggregation of the test counts and the total run time would be helpful?
I've been using your package and it works great, thank you for building it.
When I run python -m unittest, it executes the unittest module as a script (something like python unittest.py), which gives me some flexibility. For example, when I run Python unit tests inside of TeamCity, I can run python -m teamcity.unittestpy instead to use TeamCity's Python package (essentially a wrapper around the unittest module), which discovers and runs tests and reports the results in a format that TeamCity can understand.
Do you think that could be added as a parameter, something like unittest-parallel -m teamcity.unittestpy?
By quickly looking at your code and teamcity-messages's documentation, I believe that, in the scenario I described above, this line would run TeamcityTestRunner() instead.
I would be more than happy to work on it and put up a PR; let me know.
Note: This is probably an enhancement and not an issue.
They are supported by unittest, and make shared setup much more efficient. (like setting up a database connection for multiple tests)
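The efficiency point can be seen with a shared database connection; here is a minimal sketch using an in-memory sqlite3 database (the DBTests class is invented for illustration):

```python
import sqlite3
import unittest

class DBTests(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Opened once and shared by every test in the class
        cls.conn = sqlite3.connect(':memory:')
        cls.conn.execute('CREATE TABLE t (x INTEGER)')

    @classmethod
    def tearDownClass(cls):
        cls.conn.close()

    def test_insert(self):
        self.conn.execute('INSERT INTO t VALUES (1)')
        count = self.conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
        self.assertGreaterEqual(count, 1)

    def test_select(self):
        # The shared connection is reused; no per-test reconnect cost
        self.assertIsNotNone(self.conn.execute('SELECT 1').fetchone())

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(DBTests))
print(result.wasSuccessful())  # True
```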
Hi, thank you for this tool.
Just pointing out that I had to add __init__.py to my tests dir for this tool to work.
I noticed our test suite sometimes fails with a multiprocessing exception, always the same one (TypeError: an integer is required (got type NoneType)). The chance seems to be around 50-50, and if I run the tests again it usually works. It only happens under Python 3.7.
Tests are run with poetry run unittest-parallel -j 16
Here is an example run: https://github.com/datafold/data-diff/runs/7197722519?check_suite_focus=true
And here is the stack-trace from that run:
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/home/runner/.cache/pypoetry/virtualenvs/data-diff-DY5pfXRE-py3.7/lib/python3.7/site-packages/unittest_parallel/main.py", line 269, in run_tests
if self.failfast.is_set():
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/managers.py", line 1088, in is_set
return self._callmethod('is_set')
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/managers.py", line 818, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
TypeError: an integer is required (got type NoneType)
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/data-diff-DY5pfXRE-py3.7/bin/unittest-parallel", line 8, in <module>
sys.exit(main())
File "/home/runner/.cache/pypoetry/virtualenvs/data-diff-DY5pfXRE-py3.7/lib/python3.7/site-packages/unittest_parallel/main.py", line 116, in main
results = pool.map(test_manager.run_tests, test_suites)
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 268, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
TypeError: an integer is required (got type NoneType)
Hi everyone, sorry if this is not supposed to be raised as an issue; I am really new at GitHub, to be honest :)
I found your project MARVELOUS! It works wonders, and honestly has given me far superior performance to pytest-parallel or pytest-xdist. The only thing I have not been able to find is how to generate JUnit-XML-like reports from it.
I found among the flags the coverage report, which gives me a nice % of how much of each file of my project has been covered, but I haven't found a report containing the failures and errors from the tests themselves (which I will then send to my CodeBuild reports).
Could you please inform me if there is a way? If this is not how I should ask, I definitely apologize in advance.
Hi @craigahobbs This is not an issue, but I don't know where else to post this. I just tried your test runner on my test suite, and it shaved 70% off the runtime of my painfully long integration tests. Plus, in contrast to other parallel runners I tried, it actually worked. So here's a big thank you for your work :-)