xparq / space_test
Ahh, OK, so it's "space" for a different reason... ;)
# Space Test

Superfrugal autotest framework, in case you got stranded on a dead planet.

(See `_engine/VERSION` to identify the instance.)

Features:

- ...umm, supports spaces in test case names?

  (But seriously: `make` tools (and many other utilities) are traditionally hostile to files with spaces in their names, so in order to keep the chilling frugality of this tool (to only depend on whatever sh-like shell it can find to run itself), and its stubbornness to just run, dammit, no matter what, constantly overcoming space issues all the way through has been perhaps the single biggest challenge. (See e.g. GH issues #25, #42...) Oh, BTW, see, even GitHub still can't support spaces in repo names! ;) )

Anyway:

- Very small, frugal, self-contained (just a bunch of shell scripts + some change)
- Depends on basically nothing but some `sh` shell (a BusyBox .exe for Windows has been packed, to be sure, but Git's bash or WSL should be fine) -- and of course the toolset you may want to use for (auto)building stuff for the test cases, if needed
- Everything-agnostic: can help testing anything (and basically anywhere)
- Filenames are the test titles (which is the main motivation for the "aggressive" space (and other misc. char) support in paths)
- Despite the minimalism and early stage, still fairly comfy & flexible:
  - single-file or subdir test cases
  - cases can have a custom variation (build) of the (common) test subject -- or basically any arbitrary test env/setup whatsoever
  - trivially lean & simple script format:

    ```
    RUN something
    EXPECT some result
    ```

    or as shell commands (as opposed to directly exec'ing; built-ins like `echo` must be SH'ed):

    ```
    SH echo -n this
    EXPECT this

    SH echo newline
    EXPECT "newline
    "

    SH "echo Dir list:; ls -r"
    SH echo command 3
    EXPECT "Dir list:
    Hi from the test case dir!
    CASE
    command 3
    "
    ```

  - or mix in any normal .sh (BB ash) syntax:

    ```
    if [ -n "$SOME_FLAG" ]; then
        prepare_something      # no output capture
        RUN stuff              # output captured as usual
        EXPECT "one thing"     # quoting these is a good habit!
    else
        prepare_something_else
        RUN stuff
        EXPECT "other thing"
    fi
    ```

  - Multiple test (variant) runs in a single case:

    ```
    RUN some command
    RUN other command
    RUN command --with-params
    EXPECT "some result
    some other results
    accumulated
    "
    # (Assuming those commands print with trailing newlines.)
    ```

    or, equivalently, with interleaved EXPECTs:

    ```
    RUN some command
    EXPECT "some result
    "
    RUN other command
    EXPECT "some other results
    "
    RUN command --with-params
    EXPECT "accumulated
    "
    ```

  - or standalone "EXPECT" files (overriding any EXPECT clauses)
  - Check for command exit status:

    ```
    EXPECT_ERROR
    RUN thing_returning_nonzero

    EXPECT_ERROR 4
    RUN thing_that_should_return_4

    EXPECT_ERROR ignore  # For all subsequent steps (until another EXPECT_ERROR)
    RUN retval_doesnt_matter --retval 0
    RUN retval_doesnt_matter --retval 1

    EXPECT_ERROR warn
    RUN make -s something  # retval will be noted, but not used
    ```

- Arbitrary multi-level test tree hierarchy
- Manual (forced) FAIL, PASS, ABORT
- Flexible runner: `run` to test all, or `run some*`, or `run this "and that"` -- and no need to be in the test dir, so e.g. `test/run` from the prj. dir (if it's the parent of `test`) would also be fine.

  BTW, since there's no reliable way to identify a test dir (you see, it's so flexible, it can be anything!...), it's just assumed to be the parent of the `_engine` dir currently, or whatever the `TEST_DIR` env. var points to, if set. If the test dir is not `_engine/..`, it's recommended to have a simple proxy `run` script there (e.g. just a one-liner, executing the real one at `.../_engine/run[.cmd]`), which could then conveniently set the `TEST_DIR` variable and/or pass test case filter params etc. (Just symlinking to `_engine/run` doesn't work yet, but will: #51.)

  NOTE: outside of Windows + BusyBox (the default for me), wildcard patterns may need to be in single quotes (`'pattern*'`) to work as expected!

- "GitHub-Actions-ready" MSVC and GCC autobuild (for both test-subject and custom test-case code)

  BTW, I've hacked this together just for this exact purpose, in fact. (It's ridiculously primitive and limited yet, and C++ only, etc., but good enough for sales...)

- Doesn't require CMake (e.g. to replace trivial single-line compiler commands with multi-100 megs of opaque, fragile, ugly complexity)
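The accumulate-then-compare behavior of multiple `RUN`s followed by one `EXPECT` can be illustrated with toy stand-ins (NOT the real engine -- just a sketch of the contract, with made-up one-line implementations):

```shell
#!/bin/sh
# Toy stand-ins (NOT the real engine!): RUN appends each command's captured
# output (plus a newline), a single EXPECT then compares the accumulated text.
OUT=""
RUN() { OUT="${OUT}$("$@")
"; }
EXPECT() { if [ "$OUT" = "$1" ]; then echo "PASS"; else echo "FAIL"; fi; }

RUN echo some result
RUN echo accumulated
EXPECT "some result
accumulated
"
# -> PASS
```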
Updated it there to 0.08.
Fails at the GHA right now, for missing BB... -> #22
That should also make it easier to "integrate" (copy...) into other projects, esp. upgrading existing ones (better separation from test data (artifacts) and the "engine")!
Well, following through from #11, let the config tell it.
(Originally: xparq/Args#20)
Regex support in the `EXPECT` clauses could also do, but that won't address e.g. standalone `EXPECT` files.
E.g., it was also a "wish item" in the Args README case, which does indeed have a split run-script vs. `EXPECT` case setup:

```
# CLEAN s/[^ ]+.+\.exe/\.\.\.\.exe/
```
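Such a `CLEAN` rule (the directive is from the note above; the wiring is my assumption) could plausibly be applied to the captured output as an extended-regex `sed` substitution before the comparison:

```shell
# Hypothetical application of the CLEAN rule above: normalize any .exe path
# in the captured output before comparing it against the EXPECT text.
printf '%s\n' 'C:/sz/prj/test-demo.exe is up to date' \
  | sed -E 's/[^ ]+.+\.exe/....exe/'
# -> ....exe is up to date
```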
So, it's only needed for bridging over to subprocesses, but not when sourcing scripts:
I may have extra or missing `export`s basically anywhere...
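That boundary is plain POSIX shell behavior; a quick self-contained demonstration:

```shell
#!/bin/sh
# A plain variable stays visible when a script is sourced (same shell),
# but a subprocess only inherits what was exported:
PLAIN=hello
EXPORTED=world
export EXPORTED
sh -c 'echo "subprocess sees: [$PLAIN] [$EXPORTED]"'
# -> subprocess sees: [] [world]
```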
Too many incompatibilities... E.g. the makefiles can't just glue the build script path uniformly to `$_SH_`, because under Git sh it's /c/shit/..., while BB.exe has no problem with the normal C:/... paths... :-/
OTOH, if it's too explicit, or supports BB alone, then it may not run fine in a native sh env, as it should!...
But we could still stick to BB on Windows at least... (E.g. in cases where there's no `$SHELL` yet? Nope: `$SHELL` doesn't help with the Git sh vs BB_win32 path discrepancy!)
You see, that's exactly what I was talking about in #9...
(Originally: xparq/Args#33)
The `.gitattributes` settings supposedly already ensure that it's that way in the repo.
However, I still had `core.autocrlf` enabled locally, and either because of that, or because it was just never changed, (some of?) the scripts still had CRLF (in the work tree at least).
I recently `dos2unix`ed them all, and also did a `git add --renormalize` (which didn't change anything then, but suppressed Git's really inexplicable "...will change to CRLF" warning); let's see what happens next time I check something out and feed it e.g. to WSL's bash...
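For the record, a trivial way to check whether a script still carries CRLFs (a generic sketch, nothing engine-specific; the file name is made up):

```shell
#!/bin/sh
# Crude CRLF detector: grep for a literal carriage-return byte.
has_crlf() { grep -q "$(printf '\r')" "$1"; }

printf 'echo hi\r\n' > /tmp/crlf_demo.sh   # hypothetical offender
if has_crlf /tmp/crlf_demo.sh; then echo "CRLF found"; fi
# -> CRLF found
```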
But still allow overriding it with any local Makefile* just like in the other test dirs.
That should also spare the extra subshell of `SH echo`!
Related ("dependent") issues (decisions that have simplified things a lot):
`TEST_DIR/run` (and `run.cmd`) has been used as the sole reliable anchor for locating the TEST_DIR. Where that script is, there's the test stuff. And from there it also knows that `_engine` is right underneath.
The current logic is too brittle: it relies solely on `$0` being in the `_engine` dir, which is not the case when `run` is being sourced from a script elsewhere, or if it was run via a symlink (at least on Windows; can't recall if sh resolves that symlink -- hopefully not)!
Now at least `TEST_DIR` can be set before calling `run` (which would then accept it, no questions asked).
Support symlinks to `_engine/run[.cmd]` as anchors of the test dir!
-> E.g.: https://github.com/cavo789/dos_batch_tips#getsymlinktargetpath
Not to mention the current feature of also accepting Git's sh!... :-o
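The resolution logic described above might look roughly like this (a minimal sketch; the preset-`TEST_DIR` behavior is per the notes, everything else is assumed):

```shell
#!/bin/sh
# Honor a preset TEST_DIR; otherwise derive it from $0 -- which is exactly
# the step that breaks when this script is sourced or run via a symlink.
if [ -z "$TEST_DIR" ]; then
    TEST_DIR=$(cd "$(dirname -- "$0")/.." && pwd)   # assumes $0 is .../_engine/run
fi
echo "TEST_DIR=$TEST_DIR"
```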
The issue is that `busybox` versions obviously differ vastly, so relying on "found" versions in a host env. is hit-and-miss.
OTOH, bundling a version for every possible supported env. seems to be both futile and restrictive. (Or is it? First, it's only really necessary on Windows, and second, there seems to be a (32-bit?) "canonical" version at Ron's page anyway!...)
NOTE: now (-> #22) it's actually also being downloaded on-the-fly, too, so that's also an uncontrolled version dependency issue!
So the tightening, and/or the various levels of shell (mostly BB) version control, is still relevant!
Now it thinks everything's a test case.
Oh, and of course `run_cases run_cases` is a nice infinite loop then! :)
I'm no shell programmer, you see. I've basically googled and manned and SXed all the .sh syntax & semantics together, so chances are I'm still missing a few glaring issues (even with such a simple language as `ash`).
Because the shebangs still had to remain `#!/bin/sh` -- but then it's a manifest inconsistency: sh doesn't support this syntax!
The implementations I encountered on Windows all still did tho, because many are just bashes in disguise.
So, all was well, until actually running in a real Linux shell, which took the shebangs seriously, and so the scripts broke...
I mean better... It's not obvious to first-time visitors that there's anything interesting here at all!
Yeah, well, sounds scarier than it really is (it's a relic I know of, and it's harmless), but still kinda facepalm...
(The thing, BTW, that makes the difference (but shouldn't), is having or not having the `$TEST_CASE_FILE_EXT` in the TC dir's name.)
...so e.g. a missing test exe problem would be more obvious to recognize.
(Originally: xparq/Args#32)
EXEC is a perfect counterpart of SH, and it also conveys exactly what it is -- but only for devs...
I mean it's all for devs anyway, but for the overall style... It just reads shit. Like the examples in the README:
```
EXEC something
EXPECT some result
```

or

```
EXEC some command
EXEC other command
EXEC command --with-params
EXPECT "some result
....
```
Perhaps it just looks way too similar to EXPECT. Or is it just me?!
Maybe DO? Or just keep it RUN? Both are subpar for accuracy of meaning, but just taste better. Their genericity is also a plus, from the overall readability aspect (for high-level test authoring/maintenance), not just a minus (from the low-level technical test coding aspect).
I think I'll just keep it RUN for now, and revisit this if really becomes untenable.
Renaming run_cases is uncontroversial though. The mild crosstalk between RUN and run is kinda still annoying, but nothing that can't be lived with.
Well, the current 1-depth initial MVP is gonna be too tight a fit soon, obviously...
(Originally opened for Args (-> xparq/Args#30), but the test scripts now live here, so taking over...)
As an option, as that's entirely (intrusively) C++-specific!
-> e.g. https://github.com/xparq/regenv/blob/master/src/FindListPart.cpp
-> #25
Currently:
```
>bash -c ./run_cases general
./run_cases: 12: [: unexpected operator
./run_cases: 48: [: 0: unexpected operator
-------- THERE HAVE BEEN FAILED CASES!
```
Is even `[ "x" == "y" ]` a bashism, with that `==`?!
-> `shellcheck -s bash ...`
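It is a bashism, in fact: POSIX `test` only defines `=` for string equality; `==` is a bash/ksh extension, and strict shells like dash reject it with an "unexpected operator" error much like the ones above. The portable form:

```shell
#!/bin/sh
# Portable string comparison: single '=' works in every POSIX sh;
# '==' inside [ ] is a bash/ksh extension.
x="x"
if [ "$x" = "x" ]; then
    echo "portable match"
fi
# -> portable match
```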
Notes copied from a dup. (#48):
WSL:

```
.../prj/Space_Test/test$ ./run_cases
./run_cases: 213: [: gcc: unexpected operator
./run_cases: 280: [: gnumake: unexpected operator
./run_cases: 326: /mnt/c/sz/prj/Space_Test/test/_engine/init_once.sh: gnumake: not found
```
The `gnumake` dependency in `functions.sh` is gross... It's not even called "gnumake" by default. ;)
`make` would help even less here: a) it's only available there, on Windows, b) it's very limited, and c) there's likely an NMAKE also available there anyway...

```
# If listing more than one dependency, separate them with ","
```

<-- or should it be ";"?! Either way, eating up whitespace around the separator would also be a problem for the makefiles...
Ehheh, OK, fine, fine... I meant "spaces in explicitly named test case names", it seems... :)
Anyway, it was an easy fix: just replacing `$*` with (double-quoted) `"$@"` in the main runner iterator loop, as per https://linux.die.net/man/1/ash:

```
@    Expands to the positional parameters, starting from one. When the expansion occurs within
     double-quotes, each positional parameter expands as a separate argument. If there are
     no positional parameters, the expansion of @ generates zero arguments, even when @ is
     double-quoted. What this basically means, for example, is if $1 is "abc" and $2 is
     "def ghi", then "$@" expands to the two arguments:

     "abc" "def ghi"
```
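The difference can be demonstrated self-contained (generic POSIX sh behavior, not engine code):

```shell
#!/bin/sh
# "$@" keeps each (possibly space-laden) argument intact; unquoted $* (or $@)
# re-splits on whitespace -- the exact bug with spacey test case names.
set -- "case one" "case two"

count=0
for arg in "$@"; do count=$((count + 1)); done
echo "quoted \$@: $count args"      # -> quoted $@: 2 args

count=0
for arg in $*; do count=$((count + 1)); done
echo "unquoted \$*: $count args"    # -> unquoted $*: 4 args
```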
Oh yeah, obviously? And then the `CASE` files, too, obviously, right? And then what about single-file test case scripts with natural text as their names? Should they also be `"Startup with defaults.CASE"`? I mean, not that `"Startup with defaults.case"` would look any better, but capitals in the extension is a HIGHLY unusual habit, and may be off-putting to some.
OK then... Make it configurable! :) See also #12 then!
```
run_cases "TC*"
ERROR: Test case "TC dir with no suffix TC dir with no suffix and no script TC dir with suffix, but no script.case" not found!
```

Unquoted `run_cases TC*` is fine.
This should fix the missing BB error in the Args GHA test run...
-> #20
```
$ run_cases *bui*
Riding "...\prj\Space_Test\test\_engine/busybox" sh...
"...\prj\Space_Test\test\_engine\busybox" sh -c ". build.sh \"test-demo\""
realpath: .../prj/Space_Test/test/*bui*: Invalid argument
ERROR: Test case "*bui*" not found!
-------- THERE HAVE BEEN FAILED CASES!

$ run_cases *exe*
Riding "...\prj\Space_Test\test\_engine/busybox" sh...
gnumake: 'test-demo.exe' is up to date.
ERROR: Test case "general.exe" not found!
ERROR: Test case "test-demo.exe" not found!
-------- THERE HAVE BEEN FAILED CASES!
```
-> #42
See e.g. the README case with 3 `echo` lines, each with NL: expected even if there's no NL at the end of the EXPECT text!
(Originally opened as xparq/Args#35)
This is to help debugging individual cases without entering the test case dir manually just for executing a local exe there, or even just changing the last command too much...
In this mode the EXPECT phase will likely be off, so a warning should be printed.
Note: the params would be appended to every RUN/SH statement of the case.
E.g.:
```
$ run_cases "subset*"
ERROR: Test case "subset*" not found!

$ run_cases "subset*/*"
realpath: C:/sz/prj/Space_Test/test/subset subdir/subtree case 1.case subset subdir/subtree case with build.case: No such file or directory
ERROR: Test case "subset*/*" not found!
```
Woahh... OK, so it had nothing to do with EXPECT being empty, actually! :)
It was a problem of trying to `RUN echo`, which must be run via `SHELL`.
E.g. these are some of the vars exported for a build, showing what I mean:

```
export TEST_NAME
export TEST_DIR
export CASE
export TEST_CASE_DIR
...
```
And... While we are at it:
Well, there are two Git `sh`s... Oh, wait, three even, actually!... :-o And the "wrong" one is on the PATH here.
Test:
```bat
set "GITROOT=C:\Program Files\Git"

where sh.exe
rem -> C:\Program Files\Git\usr\bin\sh.exe
where find
rem -> C:\Windows\System32\find.exe
rem -> C:\Program Files\Git\usr\bin\find.exe

@echo.
@echo This will use Windows's find.exe:
sh.exe -c "which find; find . -name '*.*'"

@echo.
@echo This will use Git's find:
"%GITROOT%\bin\sh.exe" -c "which find; find . -name '*.*'"

@echo.
@echo WTF is this trying to do?
"%GITROOT%\git-bash.exe" -c "which find; find . -name '*.*'"
```
Also, critically: don't forget `-c` (the error message is horribly misleading)!

```
>"...\sh.exe" find --version
find: find: cannot execute binary file
```

So:

```
sh -c "find --version"
```
The test runner has been passing `$MAKE` for a while now, so the actual build command could be selected accordingly.
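E.g. a case's build script could pick it up like this (a sketch; `$MAKE` is the variable from the note above, the plain-`make` fallback is my assumption):

```shell
#!/bin/sh
# Pick the build command handed down by the runner; fall back to plain 'make'
# (the fallback is an assumption, not engine behavior):
MAKE="${MAKE:-make}"
echo "building with: $MAKE"
```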
Esp. this confusing one:
```
SH "echo Dir list:; ls -r"
SH echo command 3
EXPECT "Dir list:
Hi from the test case dir!
CASE
command 3
"
```
(And some other things below that.)