mixxorz / behave-django
Behave BDD integration for Django
Home Page: https://pythonhosted.org/behave-django/
The command should exit with status 1 if the tests fail and 0 if they pass. I found out because my CI was passing even when it had failing tests.
Interestingly, the setup instructions of version 0.2.2 on PyPI still list the old setup procedure, which requires adding code to environment.py. Instead, they should match the setup instructions in the package documentation.
Some features of behave-django are not available on Django < 1.5. As such, there should be a way of skipping expected-to-fail tests for particular Django/Python version combinations on the CI.
The proposed solution is to write a test script (test.sh) that the CI calls instead of python manage.py behave --tags ~@skip && python tests.py. This gives us access to the CI environment variables, which we can use to decide which tests run on which Django/Python combination.
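A minimal sketch of what such a script could look like. The DJANGO_VERSION variable and the @requires_django15 tag are illustrative assumptions, not existing names; the real script would read whatever version variable your CI exposes:

```shell
#!/bin/sh
# test.sh (sketch) -- choose the behave invocation for the current
# Django version. DJANGO_VERSION is assumed to come from the CI env.

behave_command() {
  case "$1" in
    1.4*)
      # Django < 1.5: also exclude features tagged @requires_django15
      echo "python manage.py behave --tags ~@skip --tags ~@requires_django15"
      ;;
    *)
      echo "python manage.py behave --tags ~@skip"
      ;;
  esac
}

# The real script would then run the command and the unit tests, e.g.:
#   $(behave_command "$DJANGO_VERSION") && python tests.py
behave_command "${DJANGO_VERSION:-}"
```

Repeating --tags gives AND semantics in behave, so the first branch excludes both tag sets at once.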
behave-django should be added to the Testing section of Roberto Rosario's awesome-django list.
(Just to get this off my chest)
Maybe you have a "marketing" label for that.
This was caused by incompatible reStructuredText in README.rst. Minor issue. Will be fixed in the next release.
Since we're using Django's StaticLiveServerTestCase internally to set everything up, we should have the ability to load our fixtures. Here's my proposal on how this will work.
In environment.py, before the call to behave-django's environment.before_scenario(), we load our context with the fixtures list:
def before_scenario(context, scenario):
    context.fixtures = ['user-data.json']
    environment.before_scenario(context, scenario)
behave-django would then pass this on to the test case and your fixture will be loaded.
If you wanted different fixtures for different scenarios:
def before_scenario(context, scenario):
    if scenario.name == 'User login with valid credentials':
        context.fixtures = ['user-data.json']
    elif scenario.name == 'Check out cart':
        context.fixtures = ['user-data.json', 'store.json', 'cart.json']
    environment.before_scenario(context, scenario)
You could also set fixtures per feature:
def before_feature(context, feature):
    if feature.name == 'Login':
        context.fixtures = ['user-data.json']

# This works because behave uses the same context for everything below
# Feature (scenarios, outlines, backgrounds).
def before_scenario(context, scenario):
    # You wouldn't need to change anything
    environment.before_scenario(context, scenario)
Does this look good? Does anyone have any other suggestions?
As of today, the documentation is in the repository's Wiki. It should be moved to a docs folder in the project repository and converted to reStructuredText. This way pull requests can include changes to the documentation that reflect the changes made to the source code.
Things to do afterwards:
The docs at pythonhosted.org should say version 0.2.0 but they still say 0.1.4.
So apparently, and I only just found out about this now, you can specify your features directories (among other things) using behave's configuration file. I found out about this while reading the source code, trying to see if there's a less hacky way of executing behave programmatically.
If everything works out well, you'd just need to make a .behaverc in your project's root directory, and behave will get its settings from there.
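For illustration, a minimal .behaverc might look like this (the paths value is an assumption about your project layout; behave supports many more options in its [behave] section):

```ini
# .behaverc in the project root
[behave]
paths = features/
format = pretty
```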
In lettuce, you can specify which feature file to run, and further narrow it down by scenario:
http://lettuce.it/reference/cli.html#lettuce-running-only-some-scenarios-all-feature-files
Is this currently possible in behave-django? It's an important feature, and I'd like to document it if it's already built in.
As far as I know, this isn't possible in django-behave. See: django-behave/django-behave#58
Thanks for this useful package!
When trying to use this behave feature through behave-django, I got this error:
$ ./manage.py behave --define debug_on_error=on
Creating test database for alias 'default'...
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/[site-package]/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/[site-package]/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/[site-package]/django/core/management/base.py", line 390, in run_from_argv
self.execute(*args, **cmd_options)
File "/[site-package]/django/core/management/base.py", line 441, in execute
output = self.handle(*args, **options)
File "/[site-package]/behave_django/management/commands/behave.py", line 80, in handle
exit_status = behave_main(args=behave_args)
File "/[site-package]/behave/__main__.py", line 53, in main
config = Configuration(args)
File "/[site-package]/behave/configuration.py", line 622, in __init__
self.setup_userdata()
File "/[site-package]/behave/configuration.py", line 750, in setup_userdata
self.userdata.update(self.userdata_defines)
ValueError: dictionary update sequence element #0 has length 17; 2 is required
The bug occurs in the behave_django.management.commands.behave.get_behave_options function (source code), which removes some keywords in place from the behave.configuration.options config. The removed keywords (here type) are then gone for the whole process, and behave no longer parses its arguments correctly. This can be fixed by using a copy of the keywords dict.
Pull request will come this afternoon.
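A self-contained sketch of the bug and the fix (the data and function names are illustrative, not the actual behave-django code): the fix works on a copy of each keywords dict, so pruning the type entries no longer mutates the shared configuration.

```python
# Stand-in for the shared module-level behave.configuration.options:
# a list of (flags, keywords) pairs, as used to build the option parser.
OPTIONS = [(('--define',), {'type': str}), (('--tags',), {'type': str})]

def get_filtered_options_buggy(options):
    # Mutates the shared structure: 'type' disappears for the whole process
    for _flags, keywords in options:
        keywords.pop('type', None)
    return options

def get_filtered_options_fixed(options):
    # Copy each keywords dict first, so the shared config stays intact
    filtered = [(flags, dict(keywords)) for flags, keywords in options]
    for _flags, keywords in filtered:
        keywords.pop('type', None)
    return filtered

filtered = get_filtered_options_fixed(OPTIONS)
assert all('type' in kw for _flags, kw in OPTIONS)  # original untouched
```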
Django's testing suite has an option to pass --keepdb to avoid creating and destroying the test database on each test run. You should be able to use python manage.py behave --keepdb to utilize this functionality.
The tests are a mess. I'm looking into using tox to better organize and automate testing. Maybe also move to nose to be more Pythonic.
CONTRIBUTING.md should be converted to reStructuredText (.rst) and then included in the docs generated by Sphinx.
Sometimes in integration testing it may make sense to run BDD tests against an existing database, such as a copy of a production database. -- This is fundamentally different to unit tests where we always want a predefined state for testing against a predefined result.
At the moment this is not possible with Django and behave-django, because the django_test_runner creates and destroys, and the underlying LiveServerTestCase modifies and flushes, the database used for testing.
The solution may be to provide a command line switch, e.g. --no-test-setup, that makes clear that the current default settings are used, identical to running python manage.py runserver directly.
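A minimal sketch of how such a switch could be declared. The flag name is the one proposed above; the wiring into behave-django's command is hypothetical, but Django management commands expose a standard argparse parser via their add_arguments() hook:

```python
import argparse

# Stand-in for the parser a Django management command receives
# in its add_arguments() hook.
parser = argparse.ArgumentParser(prog='manage.py behave')
parser.add_argument(
    '--no-test-setup', action='store_true', default=False,
    help='Skip test database creation and run against the configured '
         'database, as "manage.py runserver" would.')

opts = parser.parse_args(['--no-test-setup'])
```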
It doesn't right now. It should, though.
It has something to do with redirecting to /dev/null, apparently.
Currently, running behave raises an error if BEHAVE_FEATURES doesn't exist in your settings. That should not be the case.
When I run python manage.py behave feature/some-feature.feature, behave runs everything. I installed 0.2.0 and it accepted the positional arguments, so we have a regression. We should add a test for this.
Currently, our tests don't cover Python 3.5. Django 1.9 officially supports Python 3.5, and we test Django 1.9 in tox.ini and .travis.yml, so we should test with Python 3.5 and the Django versions that support it, too.
Note that on Travis CI you have to use python: 3.5 to specify the image you run the tests on; otherwise python3.5 will not be available (yet). See this comment for more information.
.travis.yml has to be updated accordingly.
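A sketch of the relevant .travis.yml change (the exact matrix entries depend on which Django versions are tested; the versions listed here are illustrative):

```yaml
# .travis.yml (excerpt)
language: python
python:
  - "2.7"
  - "3.4"
  - "3.5"   # selects an image where python3.5 is available
```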
The definition of basepython = py26 ... is redundant. Tox provides this by default, so it can be omitted.
Say I write a step like this:
Given there is this user :
    """
    username: Bender
    password: (\/) (;,,;) (\/)
    email: [email protected]
    """
(because yeah, deep down, Bender is rooting for Zoidberg)
And define it like this:
@Given(u"there is this user :")
def there_is_this_user(context):
    # do stuff with context.text in YAML format
The trailing : screws up matching! I have to define it like this, with a trailing space:
@Given(u"there is this user ")
Maybe it's a behave issue, not a behave-django issue, but I don't know the codebases. Can you help me?
I know I can easily work around this problem by using the regex parser instead, but I'm trying to teach BDD to people who don't know the first thing about regexes, and we all need them to learn BDD fast.
I noticed behave.py uses optparse. I think we should switch to argparse as it's better supported.
From https://docs.python.org/2/library/optparse.html:
Deprecated since version 2.7: The optparse module is deprecated and will not be developed further; development will continue with the argparse module.
I'll try to make a PR for this at some point, unless you get to it before I do.
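As a small illustration of the migration (the options shown are generic examples, not behave-django's actual flags), here is an argparse equivalent of a typical optparse definition:

```python
import argparse

# optparse version, for comparison:
#   parser = optparse.OptionParser()
#   parser.add_option('--tags', dest='tags', help='Only run matching scenarios')
# argparse handles positional arguments natively, which optparse never did:
parser = argparse.ArgumentParser(prog='manage.py behave')
parser.add_argument('--tags', help='Only run features/scenarios with matching tags')
parser.add_argument('paths', nargs='*', help='Feature files or directories')

opts = parser.parse_args(['--tags', '@wip', 'features/login.feature'])
```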
Support for... supported versions of Django. Django 1.7.7 and 1.8 are already tested. Making the tests pass on Django 1.4.20 will probably take some tweaking.
I encountered a problem where manage.py loaddata some_data.json would work, and having a TestCase with fixtures = ['some_data.json'] and running manage.py test would work, but declaring a fixture in my environment.py file raised a ContentType exception while the fixture was loading.
I don't know why it happens, but I know how to fix it. Instead of calling loaddata directly in behave-django code, we should just set the fixtures attribute on the TestCase instance attached to context. It worked when I did this.
It's still technically using the public API of TestCase, so I think it's okay. I'll make a PR for this later on.
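A minimal sketch of the idea (the class and the helper are stand-ins, not behave-django code): hand the fixtures over to the test case and let Django's TestCase load them during its own setup, which handles details like the ContentType cache correctly.

```python
# Stand-in for django.test.TestCase, which reads its 'fixtures'
# attribute itself during _fixture_setup().
class DummyTestCase:
    fixtures = None

def attach_fixtures(test_case, context_fixtures):
    """Assign the context's fixtures to the test case instead of
    calling loaddata directly (hypothetical helper)."""
    test_case.fixtures = list(context_fixtures or [])
    return test_case

case = attach_fixtures(DummyTestCase(), ['some_data.json'])
```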
Currently, behave-django always creates the database. It would be better if it didn't during a --dry-run.
This is causing some issues with another project I'm working on.
Is there any reason I should use this library instead of django-behave, which I'm currently using?
I'm happy integrating Behave and Django was really easy with behave-django. (Awesome, thank you!)
Also, context.base_url is a neat shortcut. Though I believe it's fairly common in a BDD test to reverse URLs by the reverse name of a resource and feed the resulting absolute URL to Selenium.
So, why don't we provide a get_url() function, or a get_absolute_url() after the example of django.db.models.Model? The function could return:
- the base URL itself (context.get_url())
- the base URL plus a path (context.get_url(path='/blog/page/1'))
- the base URL plus a reversed route (context.get_url(reverse_name='blog-comment-list'))
An implementation could look something like this:
from django.core.urlresolvers import reverse

def get_absolute_url(context, path=None, reverse_name=None):
    """Helper attached to context for getting URLs either by reverse name or URL path."""
    if path is not None:
        return context.base_url + path
    elif reverse_name is not None:
        return context.base_url + reverse(reverse_name)
    else:
        return context.base_url
What do you think?