cest's People

Contributors

cegonse, jamofer

Forkers

jamofer

cest's Issues

README suggests that Cest cannot be used for C projects

The README currently reads:
A test-driven development framework for C++ inspired by Jest and similar frameworks.
However, Cest can also be used to test software written in C. We may want to change the description to something like:
A test-driven development framework for C and C++ inspired by Jest and similar frameworks.

Later on, we may want to clarify that Cest is written in C++ or that tests will be written in C++.

Otherwise, C programmers won't even realize that Cest is a great tool for testing and practicing TDD on C projects. For clarity, we may want to translate the existing getting-started example to C.
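A minimal sketch of what a C-flavored getting-started example could look like. The include path, header name, and sum function are assumptions for illustration; the test file itself is still compiled as C++:

#include "cest"  // assumed single-header include

extern "C" {
    #include "sum.h"  // hypothetical SUT written in plain C: int sum(int, int);
}

describe("sum", []() {
    it("adds two integers", []() {
        expect(sum(2, 3)).toBe(5);
    });
});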

Support focusing tests through fit keyword

Tests may be focused by defining them with the fit keyword, as in:

fit("only this test will be executed", []() {
    expect(true).toBe(true);
});

it("will not be executed", []() {
    expect(true).toBe(true);
});
  • JUnit test report must indicate all tests in the suite except the focused test have been skipped.
  • Test output must show the SKIPPED badge for all tests except the focused one.

Improve string expectation error message readability

For example with toMatch("string"):

 FAIL  spec/test_assertions.cpp:76 it asserts regexs matches
    ❌ Assertion Failed: Expected pattern \w match$ did not match with To match a partial maatch
                        spec/test_assertions.cpp:76

It's quite difficult to see at a glance what's wrong; splitting the message across several lines could make it more readable:

 FAIL  spec/test_assertions.cpp:76 it asserts regexs matches
    ❌ Assertion Failed
         Expected pattern: \w match$
         Received string:  To match a partial maatch
         spec/test_assertions.cpp:76

Catch exceptions raised by SUT

When a SUT throws an exception that is not caught by either the test or the SUT itself, the program immediately terminates with an abort. Adding a catch clause around each test invocation would capture such exceptions and let execution continue normally.
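A minimal sketch of such a guard, assuming a hypothetical run_guarded helper wrapped around each test body:

#include <exception>
#include <functional>
#include <iostream>

// Returns false when the test body escapes with an exception, so the
// runner can mark the test as failed and keep executing the suite.
bool run_guarded(const std::function<void()> &test_body) {
    try {
        test_body();
        return true;
    } catch (const std::exception &e) {
        std::cerr << "Unhandled exception in test case: " << e.what() << "\n";
    } catch (...) {
        std::cerr << "Unhandled non-standard exception in test case\n";
    }
    return false;
}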

Add support for custom test runners

Custom test runners could be an interesting feature. Consider the case where a mocking framework (like fakeit) behaves in a specific way. To avoid having Cest depend on particular frameworks and tools, some kind of "Runner" capability could be added, which would improve testing with mocks and other third-party extensions.

For example, this happens using the fakeit mocking framework:

describe("Test with mocks", []() {
    it("throws a generic exception when mock verification fails", []() {
        Mock<HttpClient> mock_http_client;
        ....
        Verify(Method(mock_http_client, post).Using("hello"));
    });
});

When the above verification fails, the output does not provide useful information:

 FAIL  test.cpp:29 it throws a generic exception when mock verification fails
    ❌ Assertion Failed: Unhandled exception in test case: std::exception
                        test.cpp:29

A possibility could be to include some kind of Runner keyword, so that third party testing plugins could be used:

describe("Test with mocks", []() {
    runWith(Fakeit);
    it("throws a generic exception when mock verification fails", []() {
        Mock<HttpClient> mock_http_client;
        ....
        Verify(Method(mock_http_client, post).Using("hello"));
    });
});

So in this case, when the verification fails, the fakeit exception could be caught and printed properly.
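A rough sketch of what that could look like, with all names hypothetical except fakeit's own VerificationException:

#include <functional>
#include <stdexcept>
#include <fakeit.hpp>

// A runner wraps each test body so a plugin can translate its own
// exception types into readable failures.
struct Runner {
    virtual ~Runner() = default;
    virtual void run(const std::function<void()> &test_body) = 0;
};

struct FakeitRunner : Runner {
    void run(const std::function<void()> &test_body) override {
        try {
            test_body();
        } catch (const fakeit::VerificationException &e) {
            // Re-throw with fakeit's detailed message (assuming it is
            // reachable via what()) so the report becomes useful.
            throw std::runtime_error(e.what());
        }
    }
};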

Return with error code in main function when some test fails

The following is proposed (tested with cpm-hub):

--- a/framework/cest
+++ b/framework/cest
@@ -251,5 +251,7 @@ int main(void)
         delete test_case;
     }
 
-    return 0;
+    return std::any_of(test_cases.begin(), test_cases.end(), [](cest::TestCase *test_case) {
+        return test_case->test_failed;
+    });
 }
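Since std::any_of returns a bool, main would exit with 1 when any test fails and 0 otherwise. Note that <algorithm> must be included for std::any_of.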

Print exception contents when test fails due to exception

I've found that when a test fails because of an exception, the information provided is quite limited. It would be nice to print the exception contents. For example, when using fakeit, a verification failure results in an exception that contains the information about the failed verification. If this information is not printed, it's quite hard to trace why the exception happened, or even whether it was related to fakeit at all.

Allow configuration of Cest verbosity

Currently, Cest has just one verbosity level. The problem is that, as the number of tests grows, the output becomes unreadable. It would be great to make the output configurable, so that the verbosity could be kept small (for example, the typical dot per test).

A good reference could be the output of pytest.

Support after and before all test-suite level setup and teardown

Add support for beforeAll() and afterAll() expressions, executed exactly once per suite: beforeAll() before the first test runs and afterAll() after the last test finishes.

describe("some repository", []() {
    Repository repository;

    beforeAll([&]() {
        repository.create();
    });

    afterAll([&]() {
        repository.destroy();
    });

    beforeEach([&]() {
        repository.insert(Item(123));
    });

    afterEach([&]() {
        repository.remove(123);
    });

    it("contains an item with ID 123", [&]() {
        expect(repository.contains(123)).toBeTruthy();
    });
});

Test cases could be run as child processes

This is a suggestion for a possible enhancement. Running each test case as a child process could have a number of advantages:

  • Capturing stdout and stderr. This would improve the console output, as output generated by the SUT could be captured and displayed in order.
  • Capturing terminating signals. If a SUT raises, for example, a SIGSEGV, the entire program currently terminates and no more tests are run. By forking the process, the child process would die but the parent process (the test runner) could display information about the crash.

This feature has portability concerns. If Cest is intended to be as portable as possible, then some mechanism should be in place to allow platform-independent process spawning.
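A POSIX-only sketch of the idea, using a hypothetical run_in_child helper (a portable version would need the platform abstraction mentioned above):

#include <cstdlib>
#include <functional>
#include <sys/wait.h>
#include <unistd.h>

// Runs one test body in a forked child. The parent survives crashes
// such as SIGSEGV and can report them instead of dying with the SUT.
bool run_in_child(const std::function<void()> &test_body) {
    pid_t pid = fork();
    if (pid == 0) {       // child: run the test, report through exit code
        test_body();
        std::exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        return false;     // crashed: WTERMSIG(status) identifies the signal
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}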

Support property based test cases

Property based testing is an approach for expanding the input coverage of code. There are a number of frameworks out there for other languages supporting this feature. I think in the long term this could be an interesting feature to have, expanding the toolset available for testing.

An example of how this might work with Cest could be:

describe("a suite with properties", []() {
    property("concatenating two strings always contains the first string", [](string a, string b) {
        return (a + b).find(a) != string::npos;
    }
});

The framework would then generate a number of instances for the test, producing values randomly according to some rules. This kind of testing has some similarities with parameterized test cases, although the principles are quite different.
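As a rough illustration of what the framework would do under the hood (all names hypothetical), a property over two strings could be checked like this:

#include <random>
#include <stdexcept>
#include <string>

// Generates random string pairs and fails on the first counterexample.
template <typename Property>
void check_string_property(const std::string &name, Property prop, int runs = 100) {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> length(0, 16);
    std::uniform_int_distribution<int> letter('a', 'z');
    auto random_string = [&]() {
        std::string s(static_cast<size_t>(length(rng)), ' ');
        for (char &c : s)
            c = static_cast<char>(letter(rng));
        return s;
    };

    for (int i = 0; i < runs; ++i)
        if (!prop(random_string(), random_string()))
            throw std::runtime_error("property failed: " + name);
}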

Automatically detect all tests under spec directory

Instead of having to manually declare each test suite in spec.yaml, let the framework automatically detect all files inside the spec directory and its subdirectories that match (see the sketch after the lists):

  • test_*.extension
  • *_spec.extension
  • *_feature.extension

Where extension can be:

  • .cpp
  • .cc
  • .cxx
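For illustration, the matching rule could be expressed with C++17 std::filesystem along these lines (the actual detection would live in the build tooling):

#include <filesystem>
#include <regex>
#include <vector>

// Collects every file under `root` matching the patterns listed above.
std::vector<std::filesystem::path> find_spec_files(const std::filesystem::path &root) {
    static const std::regex pattern(R"((test_.*|.*_spec|.*_feature)\.(cpp|cc|cxx))");
    std::vector<std::filesystem::path> matches;

    for (const auto &entry : std::filesystem::recursive_directory_iterator(root))
        if (entry.is_regular_file() &&
            std::regex_match(entry.path().filename().string(), pattern))
            matches.push_back(entry.path());

    return matches;
}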

Using a comma inside a test case will break the describe macro

I've tried the following and it seems to break the macro expansion.

it("finds the plugin by a plugin is stored with the same name", [&]() {
        int var1, var2;   // The comma is detected as the third argument of the 'it' macro
...
});

I think this could be solved by changing the macro definition with ... and then using __VA_ARGS__:

#define it(x, ...)                cest::itFunction(__FILE__, __LINE__, x, __VA_ARGS__)

Expects continue to be executed even if the first one fails

I think most unit test frameworks stop test execution right after the first failure. Otherwise, some problems may go unreported. For example, if a not-NULL verification fails, the following statements could cause a segfault, which would leave the user without error information:

int *value = NULL;

expect(value).toBeNotNull();
expect(*value).toBe(27);
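One way to get this behaviour would be for a failed expectation to throw, aborting the rest of the test body. A minimal sketch with a hypothetical AssertionError type (the real Assertion class may record failures differently):

#include <stdexcept>

struct AssertionError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

template <typename T>
struct Expectation {
    T *actual;

    // Throwing here stops the test before later statements can
    // dereference the null pointer.
    void toBeNotNull() {
        if (actual == nullptr)
            throw AssertionError("Expected value to not be null");
    }
};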

Support asserting exceptions

A new assertRaises keyword should make it possible to assert that an exception is raised. If the lambda expression throws the expected exception, the assertion passes; otherwise it fails.

describe("test", []() {
  it("raises an exception", []() {
    assertRaises(std::invalid_argument, []() {
      std::stoi("apple");
    });
  });
});
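A sketch of how the keyword could be implemented as a macro, with std::runtime_error standing in for whatever failure type the framework uses:

#include <stdexcept>

// Passes when `lambda` throws `exception_type`; fails the test otherwise.
#define assertRaises(exception_type, lambda)                        \
    do {                                                            \
        bool raised = false;                                        \
        try { (lambda)(); }                                         \
        catch (const exception_type &) { raised = true; }           \
        if (!raised)                                                \
            throw std::runtime_error("Expected " #exception_type    \
                                     " to be raised");              \
    } while (0)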

Support parameterized test cases

This functionality is a bit advanced but quite useful, because it allows reusing test cases and applying them to different input data. There is a very good example using the JUnitParamsRunner at https://github.com/sandromancuso/roman-numerals-kata/blob/master/src/test/java/com/codurance/RomanNumeralConverterShould.java:

    @Test
    @Parameters({
            "1, I",
            "2, II",
            "3, III",
            "4, IV",
            "5, V",
            "7, VII",
            "9, IX",
            "10, X",
            "17, XVII",
            "30, XXX",
            "38, XXXVIII",

            "479, CDLXXIX",
            "3999, MMMCMXCIX"
    }) public void
    convert_arabic_numbers_into_their_respective_roman_numeral(int arabic, String roman) {
        assertThat(romanFor(arabic), is(roman));
    }

I'm not sure about the right syntax, but maybe something like:

describe("Test with mocks", []() {
    withParameters(Parameter(1), Parameter(2))
    .it("runs the test with each parameter instance", [](Parameter value) {
    });
});

In the above example, the test would run twice, once for each instance of the parameter value. Notice the leading dot before the it.
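A rough sketch of the plumbing behind that syntax (the names withParameters and Parameter come from the example; everything else is hypothetical):

#include <functional>
#include <string>
#include <vector>

template <typename T>
struct ParameterizedBuilder {
    std::vector<T> values;

    // The real framework would register one test case per value; this
    // sketch simply invokes the body for each one.
    void it(const std::string &name, std::function<void(T)> body) const {
        for (const T &value : values)
            body(value);
    }
};

template <typename T, typename... Ts>
ParameterizedBuilder<T> withParameters(T first, Ts... rest) {
    return ParameterizedBuilder<T>{{first, rest...}};
}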

There is an implementation of parameterized test cases (with constraints) in googletest.

Fix JUnit report support

JUnit reporting broke after the supporting Python code was dropped. This feature should come back and be fully integrated into the C++ single header.

Unicode strings are not correctly handled by JUnit results parser

A test case with a Unicode character in its name, such as:

it("😀", []() { });

will fail to build, as the JUnit report generation step fails:

Traceback (most recent call last):
  File "framework/junit.py", line 94, in <module>
    xml = generate_junit_xml(test_suites)
  File "framework/junit.py", line 76, in generate_junit_xml
    time=test_case['time']
UnicodeEncodeError: 'ascii' codec can't encode character u'\U0001f600' in position 46: ordinal not in range(128)

Support debugging of test cases

Sometimes it's useful to run a debugger to find a problem in a test case. The problem right now is that, since the describe statement is a macro, the C++ debugger seems incapable of stepping into the test cases themselves. The issue appears to be that GDB does not support stepping through lambda functions. To fix this, describe should probably not expand to a lambda function, and the same goes for the it statements.

Support skipping tests through xit keyword

Tests may be skipped by defining them with the xit keyword, as in:

it("will be executed", []() {
    expect(true).toBe(true);
});

xit("will not be executed", []() {
    expect(true).toBe(true);
});
  • JUnit test report must indicate the test has been skipped.
  • Test output must show the SKIPPED badge for xit'd tests.

Review compilation warnings with -Wsign-compare

void cest::Assertion<std::__cxx11::basic_string<char> >::toHaveLength(int64_t):

../tests/unit/framework/cest: In member function ‘void cest::Assertion<std::__cxx11::basic_string<char> >::toHaveLength(int64_t)’:
../tests/unit/framework/cest:333:37: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
                 if (actual.length() != length) {

std::__cxx11::string cest::sanitize(std::__cxx11::string):

../tests/unit/framework/cest: In function ‘std::__cxx11::string cest::sanitize(std::__cxx11::string)’:
../tests/unit/framework/cest:405:49: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         while ((start = text.find(from, start)) != std::string::npos) {
                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~

std::__cxx11::string cest::generateSuiteReport(cest::TestSuite)

../tests/unit/framework/cest:430:24: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         for (int i=0; i<test_suite.test_cases.size(); ++i) {
                       ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../tests/unit/framework/cest:439:38: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             suite_report << "}" << (i==test_suite.test_cases.size()-1? "" : ",");
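The fixes are mechanical: compare like-signed types. A sketch against the spots flagged above:

#include <cstdint>
#include <string>

// cest:333 — convert the expected length before comparing with the
// unsigned result of std::string::length()
bool matches_length(const std::string &actual, int64_t length) {
    return length >= 0 && actual.length() == static_cast<std::size_t>(length);
}

// cest:405 — std::string::size_type is unsigned, so the comparison with
// std::string::npos is like-signed
std::string::size_type next_match(const std::string &text,
                                  const std::string &from,
                                  std::string::size_type start) {
    return text.find(from, start);
}

// cest:430/439 — iterate with an unsigned index:
// for (std::size_t i = 0; i < test_suite.test_cases.size(); ++i) { ... }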

Implement functions to unconditionally pass or fail tests

I have found in some particular cases that this could be a useful feature. For example:

    it("throws an exception when requesting non stored value", []() {
        Optional<int> optional;

        try {
            optional.value();
            fail();
        } catch(const char *msg) {
            pass();
        }
    });
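A minimal sketch, with std::runtime_error standing in for whatever failure type the framework uses internally:

#include <stdexcept>
#include <string>

namespace cest {
    // Unconditionally fails the current test with an optional message.
    inline void fail(const std::string &message = "Explicit failure") {
        throw std::runtime_error(message);
    }

    // Explicit no-op that documents the intent of a passing branch.
    inline void pass() {}
}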

Expand README.md with basic information

README.md should contain at least:

  • A brief introduction to the framework
  • How to install and run tests
  • A showcase of features by example (assertions, test definitions)
  • Contribution information
  • Licensing information
