
hypothesis-auto's Introduction

hypothesis-auto - Fully Automatic Tests for Type Annotated Functions Using Hypothesis.




Read Latest Documentation - Browse GitHub Code Repository


hypothesis-auto is an extension for the Hypothesis project that enables fully automatic tests for type-annotated functions.

Hypothesis Pytest Auto Example

Key Features:

  • Type Annotation Powered: Utilize your function's existing type annotations to build dozens of test cases automatically.
  • Low Barrier: Start using property-based testing with as little friction as possible. Just run auto_test(FUNCTION) to generate and run dozens of tests.
  • pytest Compatible: Like Hypothesis itself, hypothesis-auto has built-in compatibility with the popular pytest testing framework. This means that you can turn your automatically generated tests into individual pytest test cases with one line.
  • Scales Up: As you find yourself needing to customize your auto_test cases, you can easily utilize all the features of Hypothesis, including custom strategies per parameter (see the sketch below).
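
For example, here is a minimal sketch of per-parameter customization. It assumes that keyword arguments passed to auto_test that name a function parameter override the strategy inferred from that parameter's annotation; this is an assumption about the API, not verified here:

import hypothesis.strategies as st
from hypothesis_auto import auto_test


def divide(number_1: int, number_2: int) -> float:
    return number_1 / number_2


# Hypothetical per-parameter override: constrain number_2 to positive
# integers so the generated cases never divide by zero.
auto_test(divide, number_2=st.integers(min_value=1))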

Installation:

To get started, install hypothesis-auto into your project's virtual environment:

pip3 install hypothesis-auto

OR

poetry add hypothesis-auto

OR

pipenv install hypothesis-auto

Usage Examples:

!!! warning
    In older usage examples you will see parameters prefixed with _, such as _auto_verify=. This was done to avoid conflicts with existing function parameters. Based on community feedback, the project switched to _ suffixes, such as auto_verify_=, to keep the likelihood of conflicts low while avoiding the connotation of private parameters.

Framework independent usage

Basic auto_test usage:

from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_test(add)  # 50 property-based scenarios are generated and run against add
auto_test(add, auto_runs_=1_000)  # Let's make that 1,000

Adding an allowed exception:

from hypothesis_auto import auto_test


def divide(number_1: int, number_2: int) -> float:
    return number_1 / number_2

auto_test(divide)

-> 1012                     raise the_error_hypothesis_found
   1013
   1014         for attrib in dir(test):

<ipython-input-2-65a3aa66e9f9> in divide(number_1, number_2)
      1 def divide(number_1: int, number_2: int) -> float:
----> 2     return number_1 / number_2
      3

0/0

ZeroDivisionError: division by zero


auto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, ))

Using auto_test with a custom verification method:

from hypothesis_auto import Scenario, auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_test(add, auto_verify_=my_custom_verifier)

Custom verification methods should take a single Scenario and raise an exception to signify errors.

For the full set of parameters you can pass into auto_test, see its API reference documentation.

pytest usage

Using auto_pytest_magic to auto-generate dozens of pytest test cases:

from hypothesis_auto import auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_pytest_magic(add)

Using auto_pytest to run dozens of test cases within a temporary directory:

from hypothesis_auto import auto_pytest


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


@auto_pytest()
def test_add(test_case, tmpdir):
    tmpdir.chdir()  # each generated case runs inside its own temporary directory
    test_case()

Using auto_pytest_magic with a custom verification method:

from hypothesis_auto import Scenario, auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_pytest_magic(add, auto_verify_=my_custom_verifier)

Custom verification methods should take a single Scenario and raise an exception to signify errors.

For the full reference of the pytest integration API see the API reference documentation.

Why Create hypothesis-auto?

I wanted a low-resistance way to start incorporating property-based tests across my projects. A solution that also encouraged the use of type hints was a win/win for me.

I hope you too find hypothesis-auto useful!

~Timothy Crosley

hypothesis-auto's People

Contributors

cclauss, fabaff, hugovk, sam-writer, sbraz, timothycrosley, tlambert03


hypothesis-auto's Issues

Support `Any` type

Hi!

I have tried this code:

from hypothesis_auto import auto_pytest_magic
from typing import Any

def under_test(arg: Any) -> bool:
    return arg == arg

auto_pytest_magic(under_test)

And it does not work:

_______________________________________________________ ERROR collecting ex.py _______________________________________________________
[... pluggy and _pytest collection frames elided ...]
.venv/lib/python3.8/site-packages/hypothesis_auto/tester.py:161: in auto_test_cases
    for parameters in auto_parameters(auto_function_, *args, auto_limit_=auto_limit_, **kwargs):
.venv/lib/python3.8/site-packages/hypothesis_auto/tester.py:126: in auto_parameters
    yield strategy.example()
[... repeated hypothesis validate()/do_validate() frames elided ...]
.venv/lib/python3.8/site-packages/hypothesis/strategies/_internal/types.py:229: in from_typing_type
    raise ResolutionFailed(
E   hypothesis.errors.ResolutionFailed: Could not resolve typing.Any to a strategy; consider using register_type_strategy
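
The error message itself suggests a workaround: register a strategy for Any before the tests are collected. A minimal sketch follows; the mix of strategies is arbitrary, and whether register_type_strategy accepts Any may depend on the Hypothesis version:

from typing import Any

import hypothesis.strategies as st
from hypothesis_auto import auto_pytest_magic

# Arbitrary catch-all strategy for Any; pick whatever types make sense
# for the function under test.
st.register_type_strategy(
    Any, st.one_of(st.none(), st.booleans(), st.integers(), st.text())
)


def under_test(arg: Any) -> bool:
    return arg == arg


auto_pytest_magic(under_test)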

return values are not checked

from hypothesis_auto import auto_pytest_magic, auto_test


def addition(a: int, b: int) -> str:
    return a + b


auto_pytest_magic(addition)


def test_auto():
    auto_test(addition)

Executing with pytest, everything passes. This should not pass: the return value is an int, not a str as given in the type hint.
Using hypothesis-auto 1.1.4 and hypothesis 5.37.4.
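
Until return annotations are enforced, one workaround that stays within the documented API is to assert the return type yourself through the auto_verify_ hook; a minimal sketch:

from hypothesis_auto import Scenario, auto_test


def addition(a: int, b: int) -> str:
    return a + b


def verify_return_type(scenario: Scenario):
    # Check the annotated return type manually; the assertion failure
    # surfaces the offending inputs through auto_test.
    assert isinstance(scenario.result, str), f"expected str, got {type(scenario.result)!r}"


auto_test(addition, auto_verify_=verify_return_type)  # now fails, as expected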

Typo: Scenerio should be Scenario?

I think that's the correct word:

The Collaborative International Dictionary of English v.0.48 (gcide)
Scenario Scena"rio, n. [It.]
A preliminary sketch of the plot, or main incidents, of an
opera.
[1913 Webster]

WordNet (r) 3.0 (2006) (wn)
scenario
n 1: an outline or synopsis of a play (or, by extension, of a
literary work)
2: a setting for a work of art or literature; "the scenario is
France during the Reign of Terror"
3: a postulated sequence of possible events; "planners developed
several scenarios in case of an attack"

Feature request: support for pydantic types

Thank you for the really nice library!

This would be such an awesome feature to have:

from hypothesis_auto import auto_test
from pydantic import BaseModel, ValidationError, validator


class UserModel(BaseModel):
    name: str
    password1: str
    password2: str

    @validator('name')
    def name_must_contain_space(cls, v):
        if ' ' not in v:
            raise ValueError('must contain a space')
        return v.title()


def create_user(user: UserModel):
    print(user)
    assert True


print(UserModel(name='samuel colvin', password1='zxcvbn', password2='zxcvbn'))
auto_test(create_user)  # Throws `pydantic.ValidationError` :-(
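
Until pydantic models are resolved natively, a stop-gap that stays within the documented API is to tolerate the validation error, at the cost of silently skipping the generated inputs that trip the validator; a sketch:

from pydantic import BaseModel, ValidationError, validator

from hypothesis_auto import auto_test


class UserModel(BaseModel):
    name: str
    password1: str
    password2: str

    @validator("name")
    def name_must_contain_space(cls, v):
        if " " not in v:
            raise ValueError("must contain a space")
        return v.title()


def create_user(user: UserModel):
    print(user)


# Treat ValidationError as allowed: invalid generated inputs are tolerated
# rather than Hypothesis being taught to generate valid ones.
auto_test(create_user, auto_allow_exceptions_=(ValidationError,))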

Vulnerability in dependency

The dependency hypothesis has a Race Condition vulnerability.

-> Vulnerability found in hypothesis version 5.49.0
   Vulnerability ID: 59726
   Affected spec: <6.0.4
   ADVISORY: Hypothesis 6.0.4 includes a fix for a Race Condition vulnerability. https://github.com/HypothesisWorks/hypothesis/pull/2783
   PVE-2023-59726
   For more information, please visit https://pyup.io/v/59726/f17

Note, this is also the main blocker why your project timothycrosley/portray cannot be updated without any linting errors 😅. (At least to my understanding.)
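
Assuming the fix is simply to move past the affected range, the upgrade is a one-liner:

pip3 install --upgrade "hypothesis>=6.0.4"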

superset of pytest-quickcheck?

See also pytest-quickcheck.

If hypothesis-auto is a superset of pytest-quickcheck, it might make sense to also provide a similar interface, as having autogenerated inputs for test cases is very useful in its own right.

Note that pytest-quickcheck's random input generation is flawed.

NonInteractiveExampleWarning when using auto_pytest_magic

Using the demo code at https://github.com/timothycrosley/hypothesis-auto#pytest-usage:

from hypothesis_auto import auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2

auto_pytest_magic(add)

gives:

$ pytest test_1.py
=========================================================================== test session starts ============================================================================
platform darwin -- Python 3.9.4, pytest-6.2.3, py-1.9.0, pluggy-0.13.1
rootdir: /Users/hugo/github/slackabet
plugins: hypothesis-6.9.2, requests-mock-1.8.0, rerunfailures-9.1.1, flaky-3.7.0, mockito-0.0.4, cov-2.11.1, xdist-2.2.1, testmon-1.0.3, timeout-1.4.2, forked-1.3.0
collected 50 items

test_1.py ..................................................                                                                                                         [100%]

============================================================================= warnings summary =============================================================================
../../../../Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/hypothesis_auto/tester.py:126
../../../../Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/hypothesis_auto/tester.py:126
  /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/hypothesis_auto/tester.py:126: NonInteractiveExampleWarning: The `.example()` method is good for exploring strategies, but should only be used interactively.  We recommend using `@given` for tests - it performs better, saves and replays failures to avoid flakiness, and reports minimal examples. (strategy: builds(pass_along_variables))
    yield strategy.example()

-- Docs: https://docs.pytest.org/en/stable/warnings.html
====================================================================== 50 passed, 2 warnings in 1.55s ======================================================================

That warning again:

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/hypothesis_auto/tester.py:126: NonInteractiveExampleWarning: The .example() method is good for exploring strategies, but should only be used interactively. We recommend using @given for tests - it performs better, saves and replays failures to avoid flakiness, and reports minimal examples. (strategy: builds(pass_along_variables))
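
A possible workaround while the library still calls .example() during collection is to filter the warning before invoking auto_pytest_magic; a sketch assuming only that NonInteractiveExampleWarning is importable from hypothesis.errors:

import warnings

from hypothesis.errors import NonInteractiveExampleWarning
from hypothesis_auto import auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


# Silence the warning raised by hypothesis_auto's internal .example() calls.
warnings.filterwarnings("ignore", category=NonInteractiveExampleWarning)

auto_pytest_magic(add)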

Dictionary parameters do not seem to work

Declaring my parameters with dict, I expected the dictionaries strategy to work out of the box, but it does not seem to. I was only seeing {} as generated values, even with 100 runs.
So I tried alternative type hints:

Example test case:

from typing import Dict, Mapping

def my_function(one: dict, two: Dict[str, int], three: Mapping[str, int]):
    print('one', one, 'two', two, 'three', three)

With such examples too, I'm only seeing outputs like:

one {} two {} three {}
one {} two {} three {'': 0}

I think the issue is that it should be using builds(dict) at the very least, but it's not.
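
A possible workaround, under the same assumption as the earlier per-parameter sketch (that a keyword argument naming a parameter overrides its inferred strategy), is to pass an explicit dictionaries() strategy:

import hypothesis.strategies as st
from hypothesis_auto import auto_test


def my_function(one: dict):
    print("one", one)


# Hypothetical override: an explicit strategy yields richer, non-empty
# dictionaries than inference from the bare `dict` annotation does.
auto_test(my_function, one=st.dictionaries(st.text(min_size=1), st.integers(), min_size=1))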

Add git tags

Hello,
It looks like the GitHub repository is missing tags for its releases; could you please add them?
