sstephenson / bats
Bash Automated Testing System
License: MIT License
I'm investigating using bats instead of urchin for nvm (see issue nvm-sh/nvm#429 for context), but one thing I don't see how to implement in bats is testing nvm in different shells -- zsh, etc.
I know that the project is called "bash automated testing system", so it may be a long shot, but is there any intention to support testing other shells?
Hi Sam,
I'm currently using bats but have no idea how to dig deeper into a test case when something goes wrong.
Example testcase:
@test "build documentation" {
[ -d "doc_dir/" ]
pushd doc_dir/
[ -e "Makefile" ]
make
[ -e "documentation.pdf" ]
popd
}
So if any of the above fails, I only see that the test fails and need to debug the failing line by hand. In bash I can do a set -x to see the lines as they are executed. Is it possible to do something like this in bats?
Thanks
Sven
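One common workaround (not a built-in bats feature, as far as I know) is to print context before each check: bats displays everything a failing test has echoed, so labels act like a poor man's set -x. A minimal sketch in plain bash, with illustrative commands:

```shell
#!/usr/bin/env bash
# Sketch: print each command before running it, so the failing line is
# visible in the output. In a .bats test the same idea applies, since bats
# prints everything a failing test echoed (including "$output" after `run`).

trace() { echo "+ $*"; "$@"; }

# capture the trace output so we can inspect it below
log="$(trace echo building docs; trace test -d /tmp)"
echo "$log"
```

In a .bats file you would write `run make` followed by `echo "$output"`; the echoed output only appears in the report when a later assertion fails.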
Basically I would like to be able to do the following so I can test my code is only printing color codes to TTYs:
@test 'should render color escape sequence for TTYs' {
# where normally I'd have:
# run myFunctionOrAlias "myArg1" "myArg2"
fake_tty myFunctionOrAlias "myArg1" "myArg2"
[ "$status" -eq 0 ]
# not even sure how I'd really test for said code yet, but you get the idea...
[[ "${lines[0]}" =~ "31m" ]]
}
I'm imagining that instead of executing code the way run does now, it would do something like:
script -qc 'myFunctionOrAlias "myArg1" "myArg2"' "$BATS_TMP_EXEC_OUTPUT"
# convert $BATS_TMP_EXEC_OUTPUT to $lines
Hello,
I am evaluating bats for running installation and end-to-end cli testing of our project. So far so good, but one issue I have is I would like to be able to inspect output of the installation steps.
I use normal @test tasks to do the installation steps. It looks like something is logged as /tmp/bats.PID.out and then deleted.
What I would like to have would be something like a --verbose option to watch the progress of all tests (output + stderr). I will be happy to contribute one or both features, as long as you are fine with that.
Right now, I have to use the github URL.
Bats can't read from stdin, which makes it impossible to run a command with an isolated rcfile:
#!/bin/bash
startdir="$(dirname "$0")"
bats <<EOF
@test "this better be true" {
cmd --rcfile="$startdir"/rc # FIXME doesn't work
[ 1 -eq 1 ]
}
EOF
Instead of reading from STDIN, bats prints out its help usage:
bats <<EOF
@test
EOF
# OUTPUT:
Bats 0.4.0
Usage: bats [-c] [-p | -t] <test> [<test> ...]
A workaround would be to allow users to specify variables that can be used globally within the bats file; however, bats should still probably be able to read from stdin regardless.
bats -v startdir="$(dirname "$0")" test.bats
I am currently using bats (by way of busser-bats via test-kitchen + kitchen-vagrant)... I am not sure this is even the correct layer to ask this question, but here goes...
I wish to run the bats test suite as the "vagrant" user (or in a general sense, the original user used to shell into my test-kitchen host).
At the moment it appears that my bats test suites run as the "root" user (likely via some sudoing done in the test-kitchen/busser/bats integration), and I have to work around this in order to test certain things (like rvm installations for my user).
Below is an example of how I get around the issue to test that I have the right version of ruby
@test "has default ruby version 2.0.0p195" {
sudo -E -u $SUDO_USER bash -l -c 'ruby -v | grep 2.0.0p195'
}
This works ok, but it sorta clutters up the test suite code (I am currently working on making it a reusable function). I was wondering if there was anything I could just configure to make it use the "logged in user" by default?
Side note: the user switching also causes issues with SSH agent forwarding ENVs. To keep things as simple as possible, I would like to execute my test suite as the logged-in user, and then sudo -u root where necessary (to prevent double user-switching hops and having to carry the ENV through each one).
It seems like stderr is captured and suppressed when the test is successful. Shouldn't there be a command flag to show it even upon success? For now, I have to force a fake failure to see the stderr.
Formatted Travis output here, raw output here. Maybe it needs --tap instead of --pretty because of the special characters? Seems like it's probably a Travis issue rather than a bats issue...
When run locally, the output looks like this:
/usr/local/bin/bats --pretty spec/integration/linter_test.bats spec/integration/version_test.bats
✓ builder correct lints a valid Builderfile
✓ builder exits 5 when asked to lint an invalid file
✓ builder exits 17 when asked to lint an invalid file
✓ short version is set by compile
✓ long version is set by compile
✓ branch is set by compile
✓ rev is set by compile
7 tests, 0 failures
...and when run on Travis CI, the output looks like this:
/usr/local/bin/bats --pretty spec/integration/linter_test.bats spec/integration/version_test.bats
pilepile
pilepile
pilepile
pilepile
Just by bumping my OS version from Ubuntu 12.04 to Ubuntu 14.04, the bats runner starts failing when run via test-kitchen/busser/busser-bats. The busser-bats plugin has version 0.3.1 vendored in it.
3 tests, 0 failures
Command [/tmp/busser/vendor/bats/bin/bats /tmp/busser/suites/bats] exit code was 1
Any idea? I couldn't correlate to any of the other reported issues.
Attempting to use the default script:
../libexec/bats
Yields the following error:
/usr/local/bin/bats: line 1: ../libexec/bats: No such file or directory
This seems to be due to the script not being able to locate libexec relative to the script's local directory. This corrects that problem:
$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/../libexec/bats
But yields the following error because it is not recognizing the file being passed to it:
Bats 0.2.0
usage: /usr/local/bin/../libexec/bats [-c]
I am using the following adjustment to the script to get bats running correctly:
$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/../libexec/bats "$@"
I am testing with GNU bash, version 4.1.10(4)-release (i686-pc-cygwin)
Anyone know of a way I can essentially exit an entire test run if something fails on one test?
I might have a case where I would want to not run anything else if something happens.
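Bats (0.x) has no built-in bail-out, but one workable pattern is a sentinel file: a failing must-pass test creates it, and every later test's setup() checks for it and skips. A sketch of the mechanism in plain bash, with illustrative names (`should_run`, `mark_aborted`):

```shell
#!/usr/bin/env bash
# Sketch: a fail-fast sentinel. In a .bats suite, call should_run from setup()
# (skipping when it fails) and mark_aborted when a critical test fails.

ABORT_FILE="${TMPDIR:-/tmp}/bats-suite-abort.$$"

should_run()   { [ ! -e "$ABORT_FILE" ]; }   # false once the suite is aborted
mark_aborted() { touch "$ABORT_FILE"; }      # call on a must-pass failure

# demo outside bats:
should_run && first="ran"        # first test runs normally
mark_aborted                     # a critical test failed
should_run || second="skipped"   # later tests would be skipped
rm -f "$ABORT_FILE"
echo "first=$first second=$second"
```

In a real suite the setup() guard would read `[ ! -e "$ABORT_FILE" ] || skip "earlier test failed"`.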
The following test does not appear to work:
@test "Test with filtered stderr" {
cat > "script.sh" <<< "echo stdout ; echo stderr >&2"
chmod +x script.sh
run ./script.sh 2>/dev/null
assert_output "stdout"
run ./script.sh >/dev/null
assert_output "stderr"
}
✗ Test with filtered stderr
…
`assert_output "stdout"' failed
expected: stdout
actual: stdout
stderr
This appears to be related to using a shell script. Handling the stderr from e.g. grep seems to work.
I came across this via pyenv, which uses (bash) scripts as wrappers, too.
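The behavior above is consistent with how `run` captures output: it merges the child's stderr into $output before an outer redirection on the `run` call can filter anything. One workaround is to do the redirection inside a child shell, so stderr is dropped before the capture sees it. A sketch without bats, using a temporary script:

```shell
#!/usr/bin/env bash
# Sketch: `run cmd 2>/dev/null` redirects run's own fd 2, not the child's,
# because run internally merges the child's stderr into the captured output.
# Redirecting inside bash -c filters stderr before the capture happens.

workdir="$(mktemp -d)"
printf '#!/bin/sh\necho stdout\necho stderr >&2\n' > "$workdir/script.sh"
chmod +x "$workdir/script.sh"

# roughly what `run ./script.sh 2>/dev/null` captures (stderr merged anyway):
merged="$("$workdir/script.sh" 2>&1)"

# the workaround: redirect inside the child shell
filtered="$(bash -c "$workdir/script.sh 2>/dev/null" 2>&1)"

rm -rf "$workdir"
echo "merged: $merged"
echo "filtered: $filtered"
```

In a .bats test the equivalent would be `run bash -c './script.sh 2>/dev/null'`.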
bats 0.3 can't run the rbenv test suite. With the old formatter, it passes:
$ bats -t test/commands.bats
However:
$ bats test/commands.bats
commands 1/4
/Users/mislav/.coral/repos/bats@sstephenson/libexec/bats-exec-test: line 121: echo: write error: Broken pipe
Hello,
after seeing the screencast by @drnic I'm trying out bats in test-kitchen. But I'm having trouble with the most basic test:
#!/usr/bin/env bats
@test "Test if GitHub is in the global ssh_known_hosts file" {
run cat /etc/ssh/ssh_known_hosts | grep github
[ "$status" -eq 0 ]
}
With | grep github -> failure
Without | grep github -> success, but this doesn't test what I want it to test
Running it by hand, the exit value is 0
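The pipe in `run cat /etc/ssh/ssh_known_hosts | grep github` applies to the `run` invocation itself, not to the command under test, so $status never reflects grep. Passing the file to grep directly, or wrapping the whole pipeline in a child shell, avoids this. A sketch with a temporary stand-in for the known_hosts file:

```shell
#!/usr/bin/env bash
# Sketch: checking for a pattern without piping through `run`.
# A temp file stands in for /etc/ssh/ssh_known_hosts so the example runs anywhere.

known_hosts="$(mktemp)"
echo "github.com ssh-rsa AAAA..." > "$known_hosts"

# direct form: no pipe needed, grep's exit status is the result
grep -q github "$known_hosts"; direct=$?

# pipeline form: wrap it so a single exit status covers the whole pipeline
bash -c "cat '$known_hosts' | grep -q github"; wrapped=$?

rm -f "$known_hosts"
echo "direct=$direct wrapped=$wrapped"
```

In a .bats test, `run grep github /etc/ssh/ssh_known_hosts` followed by `[ "$status" -eq 0 ]` should behave as intended.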
Packages are available from neuro.debian.net (see http://neuro.debian.net/pkgs/bats.html) and have also been uploaded to Debian (they will need to go through the NEW queue first).
The only change I had to make was to install under /usr/lib/bats (probably should have been /usr/share/bats) instead of /usr/libexec:
$> cat debian/patches/deb_libexec_to_lib
--- a/install.sh
+++ b/install.sh
@@ -28,9 +28,11 @@ if [ -z "$1" ]; then
fi
BATS_ROOT="$(abs_dirname "$0")"
-mkdir -p "$PREFIX"/{bin,libexec,share/man/man{1,7}}
-cp -R "$BATS_ROOT"/bin/* "$PREFIX"/bin
-cp -R "$BATS_ROOT"/libexec/* "$PREFIX"/libexec
+mkdir -p "$PREFIX"/{bin,lib/bats,share/man/man{1,7}}
+# debian: do not install since symlink anyway points to libexec
+#cp -R "$BATS_ROOT"/bin/* "$PREFIX"/bin
+cp -R "$BATS_ROOT"/libexec/* "$PREFIX"/lib/bats
+ln -s ../lib/bats/bats "$PREFIX"/bin/bats
cp "$BATS_ROOT"/man/bats.1 "$PREFIX"/share/man/man1
cp "$BATS_ROOT"/man/bats.7 "$PREFIX"/share/man/man7
I'm working on a small personal project and I am trying to stub out the call to hostname so that it always returns the same thing (no matter the machine it is executed on).
Basically, in my test file I just redefine hostname to echo what I want, but when I actually run my code with run, that definition isn't seen and calls to hostname end up returning my machine's hostname.
Am I doing this incorrectly, or is this not supported?
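A shell function defined in the test file is invisible to external scripts that `run` executes, because those scripts start in a fresh shell. A PATH stub works for any child process: drop an executable named `hostname` into a directory and prepend it to PATH. A sketch, with an illustrative stub value:

```shell
#!/usr/bin/env bash
# Sketch: stub an external command via PATH so child processes see it too.
# "fixedhost" is an illustrative value; in a bats suite, do this in setup().

stub_dir="$(mktemp -d)"
printf '#!/bin/sh\necho fixedhost\n' > "$stub_dir/hostname"
chmod +x "$stub_dir/hostname"
PATH="$stub_dir:$PATH"

direct_host="$(hostname)"          # the stub answers here
child_host="$(bash -c hostname)"   # and in child shells, unlike a function

rm -rf "$stub_dir"
echo "$direct_host $child_host"
```

Cleaning up the stub directory in teardown() keeps tests isolated.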
It would be nice if $output, $lines and $status could be cleanly logged somewhere after each run.
I had to echo lots of variables to tmp files to see what the output was.
What do you think? Maybe even have a --debug flag to just echo that to the test output results?
It would help streamline assertions.
If you can point me in a direction in some of your core bin files, I will just fork and throw ya a pull request with the work.
Does anyone have any idea how someone could take an existing test, call it as a "function", and make some new assertions -- maybe using $1 or some flag passed to that function -- so the test run performs additional checks when the data in the output has changed?
I basically am building up tests and I do not want to copy and paste logic from one test into another if I can help it.
Any ideas?
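Bats has no supported way to invoke one @test from another, but the usual alternative is to factor the shared assertions into a parameterized helper function (loaded from test_helper.bash) and call it from each test with different arguments. A sketch of the pattern outside bats, with illustrative names:

```shell
#!/usr/bin/env bash
# Sketch: share assertion logic between tests via a parameterized helper
# instead of copy-pasting. In bats, put the helper in test_helper.bash.

assert_greeting() {   # $1 = expected name, $2 = the output under test
  [ "$2" = "hello $1" ]
}

# two "tests" reusing the same logic with different expectations:
assert_greeting world "hello world" && t1=pass
assert_greeting bats  "hello bats"  && t2=pass
echo "$t1 $t2"
```

Each @test then stays a thin wrapper: do its unique setup, `run` the command, and call the shared helper with the expectations that differ.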
I realize that teardown/setup are called for each @ test, but can you give me an idea of how I might simply delete a file once all tests are complete?
I plan on deleting a cookies file used by curl with a session after all tests are complete. Relatedly, if the file doesn't exist, I log in to the app via curl, and want to log in only once.
I just need some advice here: each @test just runs, and there is no test-suite callback that fires once all of them complete.
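One pattern that can stand in for a suite-level teardown is to check, inside teardown(), whether the current test is the last one in the file, using variables bats sets (BATS_TEST_NUMBER and the BATS_TEST_NAMES array). A sketch with those variables simulated outside bats:

```shell
#!/usr/bin/env bash
# Sketch: run cleanup only after the last test in a file. BATS_TEST_NUMBER
# and BATS_TEST_NAMES are set by bats; they are simulated here so the
# example runs standalone. The cookies file is illustrative.

cleanup_after_last_test() {
  if [ "$BATS_TEST_NUMBER" -eq "${#BATS_TEST_NAMES[@]}" ]; then
    echo "removing cookies file"   # rm -f "$COOKIE_FILE" in a real suite
  fi
}

BATS_TEST_NAMES=(login fetch logout)
BATS_TEST_NUMBER=2; mid="$(cleanup_after_last_test)"    # not last: no-op
BATS_TEST_NUMBER=3; last="$(cleanup_after_last_test)"   # last: cleans up
echo "mid='$mid' last='$last'"
```

The login-once half of the question is the mirror image: a sentinel file checked in setup() so the curl login only happens the first time.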
Hi,
I'm extensively using the bats testing framework in trousseau. I have written a bunch of tests. Everything works fine on my laptop (OSX) and in vagrant boxes (Ubuntu 14.04); however, when I run the tests in a CI environment like codeship or travis, I noticed that bats always ends up stuck on the last test of each file.
For example:
~/bats/bin/bats -t tests/create.bats
1..6
ok 1 create store with one valid recipient succeeds
ok 2 create symmetric store succeeds
ok 3 create generates a file in 0600 mode
ok 4 create store with one invalid recipient fails
ok 5 create store without a recipient fails
ok 6 create store with one valid recipient and one invalid recipient fails
# Hangs here, forever
Any ideas what could cause this? A wait4 syscall never returning? A SIGCHLD never received? An improper redirection of stdout/stderr hiding the real problem?
Thanks for your help :-)
Theo
https://github.com/juanibiapina/dotfiles/blob/master/zsh/tests/zshconfig/test_helper.bash
This is just something I wrote to get better feedback when using bats and tdd.
Do you think you want to include something like that?
My tests have some expensive but shared setup that needs to be done before running all the tests. The setup can be shared, so it would be nice if I could do it once per .bats file.
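Since bats (0.x) re-executes the file for every test, setup() runs each time and plain variables don't survive between tests; a sentinel file on disk does. A sketch of once-per-file setup, simulated outside bats:

```shell
#!/usr/bin/env bash
# Sketch: expensive setup guarded by a sentinel file, so it runs only once
# per suite even though bats calls setup() before every test. Simulated here
# by calling the guard twice; the sentinel path is illustrative.

sentinel="${TMPDIR:-/tmp}/expensive-setup-done.$$"

setup_once() {
  if [ ! -e "$sentinel" ]; then
    echo "doing expensive setup"   # the costly work goes here
    touch "$sentinel"
  fi
}

first="$(setup_once)"    # first test: performs the setup
second="$(setup_once)"   # later tests: sentinel exists, nothing to do
rm -f "$sentinel"
echo "first='$first' second='$second'"
```

Removing the sentinel after the last test (see the BATS_TEST_NUMBER trick) keeps subsequent runs fresh.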
I refreshed my Jenkins environment and thought I'd re-run a number of jobs that I set up a year or two ago. One of them uses bats.
Although the project ran without failing tests 2 years ago, I now get failing tests:
49 tests, 42 ok, 7 not ok, 0 skipped, 0 Bail Out!
I can provide more details if needed, but before spending more time on it I thought I'd check in with you.
@sstephenson I realize this request may go against the recommended usage for bats, so please forgive me. Also, if this feature already exists, sorry. I'm trying to figure out a way to print output to STDOUT during a test. This feature would be great in either a .bats test script or in a script of helper functions (a .bash file), especially when the application developer has some long-running process they're waiting on, e.g. to print status messages while waiting for the process to finish. The motivation for this capability would be general debugging of particular unit tests and integration with other tools, e.g. Jenkins.
Below is an example scenario:
longRunningTest.bats
#!/usr/bin/env bats
load helper_functions
@test "Test some long running process" {
run longRunningProcess
echo >&2 "$output"
echo "$output" | grep -q "Test is passed"
}
helper_functions.bash
#!/bin/bash
longRunningProcess() {
test_again=true
while [ "$test_again" = true ]; do
echo "Status printed here would be great! :) "
if isProcessComplete; then
test_again=false
echo >&2 "Test is passed"
fi
done
}
Perhaps to implement such a feature, bats could internally write to a temp file (e.g. using mktemp) rather than using STDOUT to construct its TAP output, etc.
What are your thoughts?
Bats is really useful. May I suggest adding it to the wikipedia page for TAP?
g.
I am a newbie to bash shell programming. I just found this great tool and thought I'd give it a shot. One weird thing I ran into is that when all the cases are done, it prints
"ok 1 xxxx
ok 2 xxx
ok 3 xxx"
and just stops there; only ctrl+c can make it return.
Some of the cases invoke shell scripts directly.
I would like to know: is there anything I need to know about writing bash programs so they can be tested by bats?
Thanks
I'm writing acceptance tests for Packer and each test takes at least a minute to run since it is spinning up a real AWS instance for each test.
There is no reason the test cases can't run in parallel. It'd be nice if bats handled this. I don't think it'd be too difficult with bash's built in job handling.
I am sourcing my functions in the test as part of setup so:
setup() {
. ./my_functs
}
I like to run my bash scripts with "set -o nounset" but get this error on exit.
/usr/lib/bats/bats-exec-test: line 246: BATS_TEST_SKIPPED: unbound variable
Is this a bug or am I just doing it wrong?!
In the rbenv test suite I have a couple of assertion functions in test/test_helper.bats. When a test in e.g. test/exec.bats fails, the filename for the failure is reported properly, but the line number comes from the implementation in test/test_helper.bats and therefore doesn't match the proper location in the original file.
This is the testcase which demonstrates the failure:
#!/usr/bin/env bats
@test 't' {
#set -x
command="echo -e"
$command I SUCCEED
set -o >before.txt
run true
echo $command
set -o >after.txt
$command I FAIL
}
Tracing reveals that the I FAIL line is invoked as 'echo -e' I FAIL, and the interpreter tries to execute exactly the quoted command.
All my tests take multiple minutes to run. I categorize them into separate files, but it would still be nice to be able to pass bats the name of a single test so it runs only that test.
I'd like to be able to log stuff (e.g. the $output, $status) in order to get a grip on what causes the test to fail.
Not sure I understand TAP enough, but it would be useful to more easily see when something fails. I am happy to do it and submit a pull request, but thought I would get some input first. Thanks
Using the simple test provided on the introductory page:
#!/usr/bin/env bats
@test "addition using bc" {
result="$(echo 2+2 | bc)"
[ "$result" -eq 4 ]
}
@test "addition using dc" {
result="$(echo 2 2+p | dc)"
[ "$result" -eq 4 ]
}
Test yields the following error:
/tmp/bats.5380.src: line 13: syntax error: unexpected end of file
I am testing with GNU bash, version 4.1.10(4)-release (i686-pc-cygwin)
Debian uses /bin/dash for /bin/sh. Will bats work with the Debian almquist sh? Has anyone tested with other shells? This would be nice to know up front in the documentation.
@test "Pinging server" {
run curl -I http://localhost:8088/tracks?key=9 | head -n 1| cut -d $' ' -f2
[ "$status" -eq 200 ]
}
Essentially, this is meant to grab the status code. But I get the following error:
`[ "$status" -eq 200 ]' failed with status 2
/var/folders/7w/h7x7t8wn66v8bts2_cr1d3sm0000gp/T/bats.87996.src: line 10: [: : integer expression expected
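Two things go wrong in the test above: the pipe applies to `run` rather than to curl, and $status is curl's exit code, not the HTTP status. curl can print the HTTP status itself with `-w '%{http_code}'`. A sketch with a stub standing in for curl so it runs without a server:

```shell
#!/usr/bin/env bash
# Sketch: get the HTTP status code from curl's -w write-out instead of
# parsing headers. A stub replaces curl here so the example needs no server;
# the real flags are -s (silent), -o /dev/null (discard body), -w (write-out).

curl() {            # hypothetical stub simulating a 200 response
  printf '200'
}

code="$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:8088/tracks?key=9")"
[ "$code" -eq 200 ] && echo "server is up"
```

In a .bats test this becomes `run curl -s -o /dev/null -w '%{http_code}' <url>` followed by `[ "$output" -eq 200 ]`, since the code arrives on stdout rather than in $status.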
Looks like the teardown is executed inside an eval, so all of my attempts to suppress the following output have proven ineffective:
[redacted]:tests jonathan$ bats end-to-end.sh
/usr/local/libexec/bats-exec-test: line 181: 94073 Terminated: 15 eval ${cmd} (wd: ~/[redacted])
✓ requests from same client are routed to a single server
1 test, 0 failures
On your friendly neighborhood OpenBSD 5.4 box, install rbenv and then try to run the tests:
$ cd .rbenv
$ bats test
env: bash: No such file or directory
env: bash: No such file or directory
env: bash: No such file or directory
...
142 tests, 0 failures
$
There are 142 of the "env: bash: No such file or directory" lines; I didn't paste all of them for the sake of brevity. Bats doesn't notice any failures, though it is, in fact, not succeeding -- I verified that by removing various lines from the tests that cause failures when running them on OS X 10.9.
My bash-fu is very limited... I added echo statements all over the place to try and figure out what's going wrong. It seems as if the name of the temp file that is generated is different from the name of the temp file that is expected when finally trying to run the tests. I noticed that the generated temp file was different every time, but when I ran the tests manually, it always looked for the same tmp file:
$ /usr/local/libexec/bats-exec-test -x /home/scott/.rbenv/test/commands.bats test_commands_-2d-2dno-2dsh 5
/usr/local/libexec/bats-exec-test: line 278: /tmp/bats.31280.src: No such file or directory
$
So that's something.
Until this mystery is solved this baby will have to wait: rbenv/rbenv#520
The following test suite
teardown()
{
unknown_command_4711
}
@test "unexpected pass" {
[ 1 -eq 2 ]
}
passes, although it should not.
Replacing teardown with setup forces the test suite to fail as expected.
I am migrating some existing tests I wrote in bash to bats. In my testing I am comparing results of an exec with known good results stored in a file in my spec dir. There seems to be no easy way to access the filename/path of the executing test within the test. Would it be possible to set this as an environment variable or make the original argv available in the tests?
I'm having an issue where all bats tests are passing.
@test "sanity" {
[ true = false ]
}
This is passing for me.
I sourced my lib.sh file in test_helper.bash. This caused the tests to run correctly again, but soon afterward all my tests started passing again.
Try this in bash:
user@machine:~$ set -e
user@machine:~$ [ 1 -eq 1 ]
user@machine:~$ [ 1 -eq 0 ] # shell exits
user@machine:~$ set -e
user@machine:~$ [[ 1 == 1 ]]
user@machine:~$ [[ 1 == 0 ]] # expected to exit, but doesn't
user@machine:~$ echo $? # however...
1
user@machine:~$
I managed to convert all my unit tests from shUnit2 to bats before I noticed that something was off. I was using the double-bracket notation everywhere because it's just nicer to work with. Little did I know that the set -e shell option totally ignores those double brackets.
Best of all, the docs don't mention that fact explicitly. It's because [[ is part of the conditional constructs, which of course do not trigger that shell option.
While writing a test that relied on the output of redis-cli, I was having trouble matching the $output variable. Finally, echoing $output through cat -vt showed me there was a \r (^M) stuck on the end of the line.
redis-cli outputs \r\n as line endings even on unix systems for some odd reason.
As a workaround, I piped it through perl -pe 's/\W*$//'
You can test this by running:
run echo -en "0\r\n"
echo "output: $output" | cat -vt
[ "$output" = "0" ] # fails
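A lighter-weight workaround than the perl pipe is bash's suffix-removal expansion, which can strip a trailing carriage return from the captured variable before comparing. A sketch of both the failing and the fixed comparison:

```shell
#!/usr/bin/env bash
# Sketch: strip a trailing CR (as produced by redis-cli's \r\n endings)
# from captured output before comparing. ${var%$'\r'} removes one trailing
# carriage return if present, and is a no-op otherwise.

raw=$'0\r'                                      # what $output looks like
[ "$raw" = "0" ] && naive=match || naive=mismatch   # the comparison that fails

clean="${raw%$'\r'}"                            # the fix
[ "$clean" = "0" ] && stripped=match

echo "naive=$naive stripped=$stripped"
```

In a bats test, `output="${output%$'\r'}"` right after `run` normalizes the variable for all subsequent assertions.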
I wrote a handy traceback do-hickey for bash a while back...
https://docwhat.org/tracebacks-in-bash/
It works pretty well. The biggest caveat is that trap handlers get lost in subshells, and bash (depending on the version) creates subshells in some unexpected places.
Ciao!
Here follows a bats file that fails: the first run's assertion passes, but not the second.
loop_func() {
local search="none one two tree"
local d
for d in $search ; do
echo $d
done
}
@test "loop_func" {
run loop_func
[[ "${lines[3]}" == 'tree' ]]
run loop_func
[[ "${lines[2]}" == 'two' ]]
}
outputs:
bats loop.bats
✗ loop_func
(in test file loop.bats, line 14)
`[[ "${lines[2]}" == 'two' ]]' failed
1 test, 1 failure
Up until now, I've been intuitively assuming that outside of add-ons like @test blocks, bats tests were just Bash -- that I could use whatever Bash syntax I wanted in a .bats file.
Today I ran into a clear case where this isn't so. At the top of a test file, I have the following:
#!/usr/bin/env bats
if [ ! -x "../spacecat" ]; then
printf "No executable ../spacecat found.\n"
printf "Run 'make' in the top-level and try again.\n"
exit 1
fi
The problem though is that stdout must remain TAP-compliant. This hadn't occurred to me, and the error was opaque:
$ ./runner.bash
/usr/local/Cellar/bats/0.3.0/libexec/bats-format-tap-stream: line 103: [: executable ../spacecat found.: integer expression expected
/usr/local/Cellar/bats/0.3.0/libexec/bats-format-tap-stream: line 59: printf: executable ../spacecat found.: invalid number
0 tests, 0 failures
I'm happy to add something to the README to clarify, but are there any other limitations on what should go in a bats file that I should mention?
It would seem that tests that are set to skip still get setup() and teardown() executed. Take this for example:
#!/usr/bin/env bats
setup() {
sleep 30
}
teardown() {
sleep 30
}
@test "ps" {
skip "test setup/teardown on skip"
run bash -c "echo hello"
echo "output: "$output
echo "status: "$status
assert_success
}
# time bats test.bats
- ps (skipped: test setup/teardown on skip)
1 test, 0 failures, 1 skipped
real 1m0.061s
user 0m0.013s
sys 0m0.038s
Is this expected behavior? Is it possible to make this not the case? Thanks for putting this project together. It's been great to work with!
How would you feel about putting the "plan" output (1..n) at the end rather than the beginning of the run? The advantage is that you don't have to keep the user waiting before printing output. I agree with your choice not to force people to manually specify the number of tests, but I would prefer not to wait for the output. The plan at the end is still TAP-compliant, so, for example, BATS could still work nicely with prove or other such tools.
If you're interested, I'm happy to help make it happen.