
pytest-clarity's Issues

clarity changes verbosity

It's possible that this is on purpose, but it's jarring to me.
I have found the plugin most useful when paired with -qq, which turns off the -v or -vv (I'm not sure which) that the plugin sets automatically.

For example, running pytest with no flags normally prints rows of dots, and with -v you get the full test list with pass/fail status and percentages.
With pytest-clarity, you get all of that without passing -v.
That's fun for a few tests, but annoying with thousands.
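
For reference, this is the pairing I mean; -qq is a stock pytest flag, nothing plugin-specific:

$ py.test tests -qq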

Conflict with pytest-xdist

Hi, it seems this plugin causes xdist to be unable to discover the tests:

$ py.test tests --runslow -x -n auto
================================================================================================ test session starts ================================================================================================
platform linux -- Python 3.6.1, pytest-3.8.0, py-1.5.4, pluggy-0.7.1
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/emmanuel/projects/parsec-cloud/.hypothesis/examples')
rootdir: /home/emmanuel/projects/parsec-cloud, inifile: setup.cfg
plugins: xdist-1.23.2, trio-0.5.0, profiling-1.3.0, forked-0.2, cov-2.5.1, hypothesis-3.74.0
gw0 [374] / gw1 [374] / gw2 [374] / gw3 [374]
scheduling tests via LoadScheduling
...................................................................s................................s.....................................................................x..............................x... [ 54%]
..........................................................s......xssss.............x.................x............ssssss...................ssssssssss.sssssssss..........                                     [100%]
================================================================================ 337 passed, 32 skipped, 5 xfailed in 41.08 seconds =================================================================================
$ pip install pytest-clarity
Collecting pytest-clarity
Requirement already satisfied: pytest>=3.5.0 in ./venv/lib/python3.6/site-packages (from pytest-clarity) (3.8.0)
Requirement already satisfied: termcolor==1.1.0 in ./venv/lib/python3.6/site-packages (from pytest-clarity) (1.1.0)
Requirement already satisfied: atomicwrites>=1.0 in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (1.1.5)
Requirement already satisfied: attrs>=17.4.0 in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (18.1.0)
Requirement already satisfied: pluggy>=0.7 in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (0.7.1)
Requirement already satisfied: more-itertools>=4.0.0 in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (4.3.0)
Requirement already satisfied: six>=1.10.0 in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (1.11.0)
Requirement already satisfied: setuptools in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (40.0.0)
Requirement already satisfied: py>=1.5.0 in ./venv/lib/python3.6/site-packages (from pytest>=3.5.0->pytest-clarity) (1.5.4)
Installing collected packages: pytest-clarity
Successfully installed pytest-clarity-0.1.0a1
$ py.test tests --runslow -x -n auto
================================================================================================ test session starts ================================================================================================
platform linux -- Python 3.6.1, pytest-3.8.0, py-1.5.4, pluggy-0.7.1 -- /home/emmanuel/projects/parsec-cloud/venv/bin/python3.6
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/emmanuel/projects/parsec-cloud/.hypothesis/examples')
rootdir: /home/emmanuel/projects/parsec-cloud, inifile: setup.cfg
plugins: xdist-1.23.2, trio-0.5.0, profiling-1.3.0, forked-0.2, cov-2.5.1, clarity-0.1.0a1, hypothesis-3.74.0
[gw0] linux Python 3.6.1 cwd: /home/emmanuel/projects/parsec-cloud
[gw1] linux Python 3.6.1 cwd: /home/emmanuel/projects/parsec-cloud
[gw2] linux Python 3.6.1 cwd: /home/emmanuel/projects/parsec-cloud
[gw3] linux Python 3.6.1 cwd: /home/emmanuel/projects/parsec-cloud
[gw0] Python 3.6.1 (default, Jul 16 2017, 14:02:57)  -- [GCC 5.4.0 20160609]
[gw1] Python 3.6.1 (default, Jul 16 2017, 14:02:57)  -- [GCC 5.4.0 20160609]
[gw2] Python 3.6.1 (default, Jul 16 2017, 14:02:57)  -- [GCC 5.4.0 20160609]
[gw3] Python 3.6.1 (default, Jul 16 2017, 14:02:57)  -- [GCC 5.4.0 20160609]
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
scheduling tests via LoadScheduling

=========================================================================================== no tests ran in 1.95 seconds ============================================================================================
$ 
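
Until this is resolved, a workaround that should help for parallel runs is pytest's built-in mechanism for disabling a plugin by its registered name, which the plugins line above suggests is clarity:

$ py.test tests --runslow -x -n auto -p no:clarity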

Support for unittest asserts

It would be great to add support for unittest's asserts.

For example:

import unittest


class TestClarity(unittest.TestCase):
    def test_clarity_assert(self):

        left = [1, 2, 3]
        right = [1, 2, 4]
        assert left == right

    def test_clarity_self_assert(self):

        left = [1, 2, 3]
        right = [1, 2, 4]
        self.assertEqual(left, right)


if __name__ == "__main__":
    unittest.main()

The plain assert is nicely coloured in the failure output; unittest's assertEqual isn't (screenshots omitted here). Presumably that's because self.assertEqual raises AssertionError with a pre-formatted message from inside unittest, so the comparison never passes through the pytest assertion machinery the plugin hooks into.

disable output coloring

Hi,

In my pytest project I'm using Allure reports to capture the assertion errors, but because of the shell colouring the output comes out garbled with escape codes (screenshot omitted).
Would you consider adding a --no-diff-color option?
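
In the meantime, a workaround sketch (not plugin functionality) is to strip the ANSI escape sequences from the failure text before attaching it to the report:

import re

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text):
    # Remove colour/style escape sequences so the report shows plain text.
    return ANSI_ESCAPE.sub("", text)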

No coverage reporting

The contributing guidelines say, "please ensure the coverage at least stays the same before you submit a pull request", but there is no coverage reporting when running the tests with tox. It looks like coverage was removed from the requirements in 0bdbaa8.

(Also, by the way: the flake8 tox environment fails because flake8 isn't installed.)
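
A minimal sketch of what restoring both environments in tox.ini might look like (the env names, deps, and paths are my assumptions, not the project's actual config):

[testenv]
deps =
    pytest
    pytest-cov
commands = pytest --cov=pytest_clarity --cov-report=term-missing

[testenv:flake8]
deps = flake8
commands = flake8 pytest_clarity tests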

Cannot show the diff if using mock

Diffs for unittest.mock call assertions are not supported.

Note: the sample output below is manageable at this size, but real tests typically compare far more data than this sample.

Sample test:

from unittest import mock

def test_clarity():
    mocked_object = mock.MagicMock()
    mocked_object.save({"data": "wrong data"})

    mocked_object.save.assert_called_with({"data": "expected data"})

Sample output (screenshot omitted): the failure shows the expected and actual calls, but no coloured diff between them.
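
A possible workaround until mock assertions are supported: compare the recorded call against mock.call(...) with a plain assert, so the comparison goes through pytest's assertion rewriting and the plugin can diff it. A sketch:

from unittest import mock

def test_clarity():
    mocked_object = mock.MagicMock()
    mocked_object.save({"data": "wrong data"})

    # A plain == comparison is rewritten by pytest, so pytest-clarity
    # gets a chance to render the diff between the two call objects.
    assert mocked_object.save.call_args == mock.call({"data": "expected data"})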

expression drill-down gone?

Hi,

Thanks for a great plugin!

While the diffs are super helpful, it looks like we lose the very handy drill-down output:

$ cat test.py
a = [1, 2, 3]
b = [1, 2]

def test_a():
    assert len(a) == len(b)

without plugin

    def test_a():
>       assert len(a) == len(b)
E       assert 3 == 2
E        +  where 3 = len([1, 2, 3])        <-- I miss these
E        +  and   2 = len([1, 2])

with plugin

    def test_a():
>       assert len(a) == len(b)
E       assert left == right failed.
E         Showing split diff:
E
E         left:  3
E         right: 2      <--  really nice!

Perhaps @nicoddemus has advice or ideas on how to make this possible?
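
My guess at the mechanism (a hypothetical sketch, not the plugin's actual code): the plugin implements the pytest_assertrepr_compare hook, and whatever that hook returns replaces pytest's default explanation, including the "+ where 3 = len([1, 2, 3])" drill-down. Returning None from the hook makes pytest fall back to its built-in output:

# conftest.py -- hypothetical illustration of the hook involved
def pytest_assertrepr_compare(config, op, left, right):
    if op != "==":
        return None  # None means: fall back to pytest's default explanation
    return [
        "assert left == right failed.",
        "  left:  {!r}".format(left),
        "  right: {!r}".format(right),
    ]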

Assertions in other files are not diffed

Assertions raised from outside the test file are not diffed. Here's an example.

Is this maybe a failure in pytest_assertrepr_compare?

Baseline - beautiful.

def test_diff():
    assert {"1": 1} == {"2": 1}
E         LHS vs RHS shown below
E
E         {'1': 1}
E         {'2': 1}

Assert in non-test function, still beautiful.

def over_here():
    assert {"1": 1} == {"2": 1}

def test_diff():
    over_here()

E         LHS vs RHS shown below
E
E         {'1': 1}
E         {'2': 1}

Assert in another file, dang.

from tests.utils import over_here

def test_diff():
    over_here()

# tests/utils.py
def over_here():
    assert {"1": 1} == {"2": 1}

The traceback points into the helper and shows no diff:

    def over_here():
>       assert {"1": 1} == {"2": 1}
E       AssertionError

tests/utils/__init__.py:120: AssertionError
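
If the root cause is that pytest only rewrites assertions in test modules, conftest files, and registered plugins, a possible workaround is to register the helper module for rewriting before it is imported; pytest exposes this as pytest.register_assert_rewrite. A sketch, assuming the helpers live in tests.utils:

# conftest.py (at the root, before tests.utils is imported anywhere)
import pytest

pytest.register_assert_rewrite("tests.utils")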

Python 3.8 incompatibility: DeprecationWarning: collections.abc

Python 3.7 is showing the following warning:

/usr/lib/python3.7/site-packages/pytest_clarity/hints.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Sequence

I guess that means it will break on Python 3.8.
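
The likely fix in hints.py is the standard remedy for this deprecation (a sketch):

try:
    from collections.abc import Sequence  # correct location on Python 3.3+
except ImportError:
    from collections import Sequence  # fallback for older Pythons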

[BUG] Confusing output if LF at EOL is the only difference

pytest-clarity reduces the clarity of the diff when two similar strings differ only by a trailing \n.

This is the normal pytest output (without any verbosity):

>       assert 'Not enough operands\n' == 'Not enough operands'
E       AssertionError: assert 'Not enough operands\n' == 'Not enough operands'
E         - Not enough operands
E         + Not enough operands
E         ?                    +

That + at the end means the first string has an LF at the end and the second one does not.

This is the pytest-clarity output (with triple verbosity):

>       assert 'Not enough operands\n' == 'Not enough operands'
E       assert == failed. [pytest-clarity diff shown]
E
E         LHS vs RHS shown below
E
E         Not enough operands
E
>       assert 'Not enough operands' == 'Not enough operands\n'
E       assert == failed. [pytest-clarity diff shown]
E
E         LHS vs RHS shown below
E
E         Not enough operands
E
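
A possible remedy (an illustration, not plugin code) would be to render each side with repr(), or to escape trailing whitespace, so the difference stays visible:

left = 'Not enough operands\n'
right = 'Not enough operands'

# repr() keeps the \n visible instead of printing an invisible newline:
print(repr(left))   # 'Not enough operands\n'
print(repr(right))  # 'Not enough operands'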
