cegonse / cest
TDD framework inspired by Jest for C++
License: MIT License
This is a suggestion for a possible enhancement. Running each test case as a child process could have a number of advantages:

- Capturing stdout and stderr. This would improve the console output, as any output generated by the SUT could be captured and displayed in an orderly fashion.
- Crash isolation. Currently, if a test raises a fatal signal such as SIGSEGV, the entire program is terminated and no more tests are run. By forking the process, the child process would die, but the parent process (the test runner) could display information about the crash.

This feature has portability concerns. If Cest is intended to be as portable as possible, then some mechanism should be in place to allow platform-independent process spawning.
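A minimal POSIX sketch of the fork-based approach (the function name is illustrative, not part of Cest): the parent waits for the child and inspects its exit status, so a crash in one test no longer kills the runner.

```cpp
#include <sys/wait.h>   // waitpid, WIFSIGNALED, WTERMSIG
#include <unistd.h>     // fork, _exit
#include <csignal>
#include <cstdio>
#include <functional>

// Runs the test body in a forked child; returns true if it exited cleanly.
bool runIsolated(const std::function<void()> &test_body) {
    pid_t pid = fork();
    if (pid == 0) {   // child: run the test, report via exit code
        test_body();
        _exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status)) {   // the child crashed instead of exiting
        std::printf("Test crashed with signal %d\n", WTERMSIG(status));
        return false;
    }
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

A test raising SIGSEGV would then be reported by the parent instead of aborting the whole run; on non-POSIX platforms a different spawning mechanism would be needed.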
Tests may be skipped by defining them with the xit
keyword, as in:
it("will be executed", []() {
expect(true).toBe(true);
});
xit("will not be executed", []() {
expect(true).toBe(true);
});
A SKIPPED badge is shown for xit'd tests.

I've found that when a test fails because of an exception, the information provided is quite limited. It would be nice to print the exception contents. For example, when using fakeit, a verification failure results in an exception that contains information about the failed verification. If this information is not printed, it's quite hard to trace why the exception happened, or even whether it was related to fakeit at all.
This functionality is a bit advanced but quite useful, because it allows reusing test cases and applying them to different input data. There is a very good example using the JUnitParamsRunner
at https://github.com/sandromancuso/roman-numerals-kata/blob/master/src/test/java/com/codurance/RomanNumeralConverterShould.java:
@Test
@Parameters({
"1, I",
"2, II",
"3, III",
"4, IV",
"5, V",
"7, VII",
"9, IX",
"10, X",
"17, XVII",
"30, XXX",
"38, XXXVIII",
"479, CDLXXIX",
"3999, MMMCMXCIX"
}) public void
convert_arabic_numbers_into_their_respective_roman_numeral(int arabic, String roman) {
assertThat(romanFor(arabic), is(roman));
}
I'm not sure about the right syntax, but maybe something like:
describe("Test with mocks", []() {
withParameters(Parameter(1), Parameter(2))
.it("runs the test with each parameter instance", [](Parameter value) {
});
});
In the above example, the test would run twice, once for each instance of the parameter value. Notice the leading dot
before the it
.
There is an implementation of parameterized test cases (with constraints) in googletest.
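A rough sketch of how the proposed withParameters(...).it(...) chain could be implemented (all names here are hypothetical, matching the syntax suggested above):

```cpp
#include <functional>
#include <string>
#include <vector>

// Holds the parameter instances; .it() runs the body once per value.
template <typename T>
struct ParameterizedCase {
    std::vector<T> values;

    void it(const std::string &name, const std::function<void(T)> &body) {
        (void)name;  // the real framework would report one case per value
        for (const T &value : values)
            body(value);
    }
};

// Collects the parameter instances into a ParameterizedCase.
template <typename T, typename... Ts>
ParameterizedCase<T> withParameters(T first, Ts... rest) {
    return ParameterizedCase<T>{{first, rest...}};
}
```

The leading dot in the proposed syntax falls out naturally, since it becomes a member call on the object returned by withParameters.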
void cest::Assertion<std::__cxx11::basic_string >::toHaveLength(int64_t):
../tests/unit/framework/cest: In member function ‘void cest::Assertion<std::__cxx11::basic_string<char> >::toHaveLength(int64_t)’:
../tests/unit/framework/cest:333:37: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (actual.length() != length) {
std::__cxx11::string cest::sanitize(std::__cxx11::string):
../tests/unit/framework/cest: In function ‘std::__cxx11::string cest::sanitize(std::__cxx11::string)’:
../tests/unit/framework/cest:405:49: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
while ((start = text.find(from, start)) != std::string::npos) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
std::__cxx11::string cest::generateSuiteReport(cest::TestSuite)
../tests/unit/framework/cest:430:24: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i=0; i<test_suite.test_cases.size(); ++i) {
~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../tests/unit/framework/cest:439:38: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
suite_report << "}" << (i==test_suite.test_cases.size()-1? "" : ",");
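All of these warnings come from comparing a signed int (or int64_t) against the unsigned values returned by length() and size(). A sketch of the usual fix, using size_t indices and an explicit cast (illustrative helpers, not the actual Cest code):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Comparing a signed length against .length() triggers -Wsign-compare;
// checking for negatives first makes the cast safe.
bool hasLength(const std::string &actual, int64_t length) {
    return length >= 0 &&
           actual.length() == static_cast<size_t>(length);
}

// Using size_t (the container's own size type) for the loop index
// removes the warning without any cast.
size_t countCases(const std::vector<int> &test_cases) {
    size_t n = 0;
    for (size_t i = 0; i < test_cases.size(); ++i)
        ++n;
    return n;
}
```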
I've tried the following and it seems to break the macro expansion.
it("finds the plugin by a plugin is stored with the same name", [&]() {
int var1, var2; // The comma is detected as the third argument of the 'it' macro
...
});
I think this could be solved by making the macro variadic with ...
and forwarding __VA_ARGS__
:
#define it(x, ...) cest::itFunction(__FILE__, __LINE__, x, __VA_ARGS__)
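A self-contained sketch of the fix (itFunction here is a stand-in for the real cest::itFunction): the preprocessor does not treat braces as grouping, so the comma in `int var1, var2;` splits a two-argument macro, while the variadic form rejoins everything after the name into one argument.

```cpp
#include <functional>
#include <string>

// Stand-in for cest::itFunction: just invokes the body (illustrative).
void itFunction(const char *file, int line, const std::string &name,
                const std::function<void()> &body) {
    (void)file; (void)line; (void)name;
    body();
}

// Variadic form: commas inside the lambda body no longer break expansion.
#define it(x, ...) itFunction(__FILE__, __LINE__, x, __VA_ARGS__)
```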
Execute all tests:
make
Execute tests matching a substring:
make test_ma
make test_match
Test runners could be an interesting functionality. Consider the case where you have a mocking framework (like fakeit) that behaves in a specific way. In order to avoid having Cest depend on particular frameworks and tools, maybe some kind of "Runner" capability could be interesting. This capability could improve testing when using mocks and other third party extensions.
For example, this happens using the fakeit mocking framework:
describe("Test with mocks", []() {
it("throws a generic exception when mock verification fails", []() {
Mock<HttpClient> mock_http_client;
....
Verify(Method(mock_http_client, post).Using("hello"));
});
});
When the above verification fails, the output does not provide useful information:
FAIL test.cpp:29 it throws a generic exception when mock verification fails
❌ Assertion Failed: Unhandled exception in test case: std::exception
test.cpp:29
A possibility could be to include some kind of Runner
keyword, so that third party testing plugins could be used:
describe("Test with mocks", []() {
runWith(Fakeit);
it("throws a generic exception when mock verification fails", []() {
Mock<HttpClient> mock_http_client;
....
Verify(Method(mock_http_client, post).Using("hello"));
});
});
So in this case, when the verification fails, the fakeit exception could be caught and printed properly.
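One way to sketch this without making Cest depend on fakeit: runWith registers an exception translator, and the runner rethrows the in-flight exception through each translator until one recognizes it. All names and semantics here are hypothetical.

```cpp
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// Translators rethrow the active exception, catch the types they know,
// and return a message; an empty string means "not recognized".
static std::vector<std::function<std::string()>> translators;

void runWith(std::function<std::string()> translator) {
    translators.push_back(std::move(translator));
}

// Runs a test body; on failure, asks each registered translator for details.
std::string runTest(const std::function<void()> &body) {
    try {
        body();
        return "PASS";
    } catch (...) {
        for (auto &translate : translators) {
            std::string message = translate();
            if (!message.empty())
                return message;
        }
        return "Unhandled exception in test case";
    }
}
```

A fakeit plugin would then be a translator that catches fakeit's verification exception type and formats its details; here std::runtime_error stands in for that type.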
Instead of having to manually declare each test suite in spec.yaml
, let the framework automatically detect all files inside the spec
directory and its subdirectories which match:
Where extension can be:
For example with toMatch("string")
:
FAIL spec/test_assertions.cpp:76 it asserts regexs matches
❌ Assertion Failed: Expected pattern \w match$ did not match with To match a partial maatch
spec/test_assertions.cpp:76
It's quite difficult to take a quick look and see what's wrong; maybe splitting it into two lines would be more readable:
FAIL spec/test_assertions.cpp:76 it asserts regexs matches
❌ Assertion Failed
Expected pattern: \w match$
Received string: To match a partial maatch
spec/test_assertions.cpp:76
A test case with a Unicode character in its name, such as:
it("😀", []() { });
will fail to compile, as the JUnit report generation fails:
Traceback (most recent call last):
File "framework/junit.py", line 94, in <module>
xml = generate_junit_xml(test_suites)
File "framework/junit.py", line 76, in generate_junit_xml
time=test_case['time']
UnicodeEncodeError: 'ascii' codec can't encode character u'\U0001f600' in position 46: ordinal not in range(128)
A new keyword assertRaises
should allow asserting that an exception is raised. If the lambda expression throws the expected exception, the assertion passes; otherwise it fails.
describe("test", []() {
it("raises an exception", []() {
assertRaises(std::invalid_argument, []() {
std::stoi("apple");
});
});
});
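A sketch of how assertRaises could be implemented on top of a helper template (names are illustrative; the variadic macro avoids the comma problem described in another issue):

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Returns true only if the body throws exactly the expected exception type.
template <typename ExpectedException>
bool raises(const std::function<void()> &body) {
    try {
        body();
    } catch (const ExpectedException &) {
        return true;   // expected type thrown: assertion passes
    } catch (...) {
        return false;  // some other exception type: assertion fails
    }
    return false;      // nothing thrown: assertion fails
}

// The real macro would record __FILE__/__LINE__ and fail the test case.
#define assertRaises(exception_type, ...) raises<exception_type>(__VA_ARGS__)
```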
Tests may be focused by defining them with the fit
keyword, as in:
fit("only this test will be executed", []() {
expect(true).toBe(true);
});
it("will not be executed", []() {
expect(true).toBe(true);
});
A SKIPPED badge is shown for all tests except the focused one.

JUnit reporting broke after dropping the Python support code. This feature should come back and be fully integrated into the C++ single header.
In the README, it can be read:
A test-driven development framework for C++ inspired by Jest and similar frameworks.
However, you can also use Cest to test software written in C. We may want to change the description to something like:
A test-driven development framework for C and C++ inspired by Jest and similar frameworks.
Later on, we may want to clarify that Cest is written in C++, or that tests will be written in C++.
Otherwise, C programmers won't even realize that Cest is a great tool for testing and performing TDD on C projects. For clarity, we may want to translate the existing getting-started example to C.
When a SUT throws an exception that is not properly caught, either by the test or by the SUT itself, the program immediately terminates with an abort. Maybe add a catch
clause when calling tests to capture such exceptions and continue normal execution.
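A minimal sketch of such a catch clause in the runner loop (illustrative, not Cest's actual code): failures are recorded and execution continues with the next test.

```cpp
#include <cstdio>
#include <exception>
#include <functional>
#include <stdexcept>
#include <vector>

struct TestCase {
    const char *name;
    std::function<void()> body;
};

// Returns the number of failed tests; execution continues past failures.
int runAll(const std::vector<TestCase> &tests) {
    int failures = 0;
    for (const TestCase &test : tests) {
        try {
            test.body();
            std::printf("PASS %s\n", test.name);
        } catch (const std::exception &e) {
            std::printf("FAIL %s: %s\n", test.name, e.what());
            ++failures;
        } catch (...) {
            std::printf("FAIL %s: non-standard exception\n", test.name);
            ++failures;
        }
    }
    return failures;
}
```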
This way, users can edit their tests in real time and experiment. The execution environment should be as sandboxed as possible (disable opening file handles, sockets, syscalls, limit CPU usage and execution time...).
Support beforeAll()
and afterAll()
expressions, which must be executed only once: beforeAll() before any test in the suite has run, and afterAll() after all tests have been executed.
describe("some repository", []() {
Repository repository;
beforeAll([&]() {
repository.create();
});
afterAll([&]() {
repository.destroy();
});
beforeEach([&]() {
repository.insert(Item(123));
});
afterEach([&]() {
repository.remove(123);
});
it("contains an item with ID 123", [&]() {
expect(repository.contains(123)).toBeTruthy();
});
});
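The intended ordering can be sketched with a simplified suite runner (assuming the existing beforeEach/afterEach semantics stay as they are):

```cpp
#include <functional>
#include <string>
#include <vector>

// Simplified suite runner showing when each hook fires.
struct Suite {
    std::function<void()> before_all, after_all, before_each, after_each;
    std::vector<std::function<void()>> tests;

    void run() {
        if (before_all) before_all();        // once, before the first test
        for (auto &test : tests) {
            if (before_each) before_each();  // around every test, as today
            test();
            if (after_each) after_each();
        }
        if (after_all) after_all();          // once, after the last test
    }
};
```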
The following is proposed (tested with cpm-hub):
--- a/framework/cest
+++ b/framework/cest
@@ -251,5 +251,7 @@ int main(void)
delete test_case;
}
- return 0;
+ return std::any_of(test_cases.begin(), test_cases.end(), [](cest::TestCase *test_case) {
+ return test_case->test_failed;
+ });
}
Property-based testing is an approach for expanding the input coverage of code. There are a number of frameworks out there for other languages supporting this feature. I think in the long term this could be an interesting feature to have, expanding the toolset available for testing.
An example of how this might work with Cest could be:
describe("a suite with properties", []() {
property("concatenating two strings always contains the first string", [](string a, string b) {
return (a + b).find(a) != string::npos;
});
});
The framework would then generate a number of instances for the test, generating values randomly according to some rules. This kind of testing has some similarities with parameterized test cases, although the principles are quite different.
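A minimal sketch of the generation side (hypothetical helpers; a real implementation would also shrink counterexamples and report the failing inputs):

```cpp
#include <functional>
#include <random>
#include <string>

// Generates a short random lowercase string from the given RNG.
std::string randomString(std::mt19937 &rng) {
    std::uniform_int_distribution<int> length_dist(0, 7), char_dist(0, 25);
    std::string s;
    int length = length_dist(rng);
    for (int i = 0; i < length; ++i)
        s += static_cast<char>('a' + char_dist(rng));
    return s;
}

// Returns true if the property held for every generated input pair.
bool checkProperty(const std::function<bool(const std::string &, const std::string &)> &property,
                   int runs = 100) {
    std::mt19937 rng(42);  // fixed seed keeps failing runs reproducible
    for (int i = 0; i < runs; ++i)
        if (!property(randomString(rng), randomString(rng)))
            return false;
    return true;
}
```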
I think most unit test frameworks stop test execution right after the first failure. Potentially, this could lead to some problems not being reported. For example, if there is a verification for not-NULL and it fails, the following statements could lead to a segfault, which would leave the user without error information:
int *value = NULL;
expect(value).toBeNotNull();
expect(*value).toBe(27);
Support negating any assertion, as in:
it("is not a snake", []() {
Carrot carrot;
Snake snake;
assert(carrot).not.toBe(snake);
});
Tests raising signals causing the termination of the program (SIGSEGV, SIGFPE, SIGILL) should be marked as test failures.
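A POSIX sketch of one way to do this in-process with sigsetjmp/siglongjmp (names are illustrative; fork-based isolation, discussed in another issue, would be more robust):

```cpp
#include <csignal>
#include <functional>
#include <setjmp.h>

static sigjmp_buf test_jump_buffer;

// Signal handler: unwind back into the runner instead of terminating.
static void onFatalSignal(int) {
    siglongjmp(test_jump_buffer, 1);
}

// Returns false if the body raised SIGSEGV, SIGFPE or SIGILL.
bool runGuarded(const std::function<void()> &body) {
    signal(SIGSEGV, onFatalSignal);
    signal(SIGFPE, onFatalSignal);
    signal(SIGILL, onFatalSignal);
    if (sigsetjmp(test_jump_buffer, 1) != 0)
        return false;  // a fatal signal fired inside the test
    body();
    return true;
}
```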
I have found in some particular cases that this could be a useful feature. For example:
it("throws an exception when requesting non stored value", []() {
Optional<int> optional;
try {
optional.value();
fail();
} catch(const char *msg) {
pass();
}
});
Add example tests and proper documentation on how to use parametrized tests and their syntax.
The assertion passes when the expectation is matched:
it("matches regular expressions", []() {
expect("hello").toMatch("(hell)(.*)");
});
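The assertion can be sketched with std::regex_search, which (like Jest's toMatch) looks for the pattern anywhere in the string rather than anchoring it to the whole string:

```cpp
#include <regex>
#include <string>

// True if any substring of `actual` matches `pattern` (ECMAScript syntax).
bool matches(const std::string &actual, const std::string &pattern) {
    return std::regex_search(actual, std::regex(pattern));
}
```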
Support for Windows would include:
Currently, Cest has just one verbosity level. The problem is that, as the number of tests grows, the output becomes unreadable. It would be great to make the output configurable, so that the verbosity could be kept small (for example, the typical "dot per test").
A good reference could be the output of pytest
.
README.md should contain at least:
Sometimes it's useful to run the debugger in order to find a problem with a test case. The problem right now is that, as the describe
statement is a macro, the C++ debugger does not seem capable of stepping
into the test cases themselves. It seems the problem is related to GDB's limited support for debugging lambda functions. I think, in order to fix this, describe
should not expand to a lambda function, and the same goes for the it
statements.
Test suites and test cases inside each suite should be able to be launched randomly if required from a command line argument.
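A sketch of seeded shuffling (the seed would come from a hypothetical command line flag and be printed in the report, so that a failing order can be reproduced):

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// Returns the test names in a pseudo-random order determined by `seed`.
std::vector<std::string> shuffledOrder(std::vector<std::string> tests, unsigned seed) {
    std::mt19937 rng(seed);
    std::shuffle(tests.begin(), tests.end(), rng);
    return tests;
}
```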
Add documentation for all available assertions, along with examples.