
Badges: Build Status · Crates.io Package · Codacy · License: MIT

Polish

Polish is Test-Driven Development done right


Getting Started

Installing the Package

The crates.io package is kept up to date with all major changes, so you can use it by adding the following to the dependencies section of your Cargo.toml:

polish = "*"

Replace * with the version number shown in the crates.io badge above

But if you'd like to use the nightly (most recent) changes, you can point to the GitHub repository instead:

polish = { git = "https://github.com/alkass/polish", branch = "master" }

Writing Test Cases

Single Test Cases

The simplest test case can take the following form:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn my_test_case(logger: &mut Logger) -> TestCaseStatus {
  // TODO: Your test case code goes here
  TestCaseStatus::PASSED // Other valid statuses are (FAILED, SKIPPED, and UNKNOWN)
}

fn main() {
  let test_case = TestCase::new("Test Case Title", "Test Case Criteria", Box::new(my_test_case));
  TestRunner::new().run_test(test_case);
}


The example listed above is available here

You can also pass a Rust closure instead of a function pointer, like so:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn main() {
  let test_case = TestCase::new("Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  }));
  TestRunner::new().run_test(test_case);
}

The example listed above is available here

Multiple Test Cases

You can run multiple test cases as follows:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn main() {
  let mut runner = TestRunner::new();
  runner.run_test(TestCase::new("1st Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  })));
  runner.run_test(TestCase::new("2nd Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  })));
  runner.run_test(TestCase::new("3rd Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  })));
}

A more convenient way is to pass a vector of your test cases to run_tests, like so:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn main() {
    let my_tests = vec![
      TestCase::new("1st Test Case Title", "1st Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::PASSED
      })),
      TestCase::new("2nd Test Case Title", "2nd Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::UNKNOWN
      })),
      TestCase::new("3rd Test Case Title", "3rd Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::FAILED
      }))];
    TestRunner::new().run_tests(my_tests);
}


The example listed above is available here

Embedded Test Cases

You may want a set of test cases to live inside an object so that the object can test itself. A clean way to write such test cases is to implement the Testable trait. Following is an example:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase, Testable};
use polish::logger::Logger;

struct MyTestCase;
impl Testable for MyTestCase {
  fn tests(self) -> Vec<TestCase> {
    vec![
      TestCase::new("Some Title #1", "Testing Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::PASSED
      })),
      TestCase::new("Some Title #2", "Testing Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
      // TODO: Your test case goes here
      TestCaseStatus::SKIPPED
    }))]
  }
}

fn main() {
  TestRunner::new().run_tests_from_class(MyTestCase {});
}


The example listed above is available here

Attributes

Attributes allow you to change how your test cases are run. By default, your TestRunner instance runs all of your test cases regardless of whether any have failed. If you want to change this behaviour, you need to explicitly tell your TestRunner instance to stop the process at the first failure.

THIS FEATURE IS STILL WORK-IN-PROGRESS. THIS DOCUMENT WILL BE UPDATED WITH TECHNICAL DETAILS ONCE THE FEATURE IS COMPLETE.

Logging

The logger object that's passed to each test case offers four logging functions (pass, fail, warn, and info). Each of these functions takes a message argument of type String, which lets you use the format! macro to format your logs, e.g.:

logger.info(format!("{} + {} = {}", 1, 2, 1 + 2));
logger.pass(format!("{id}: {message}", id = "alkass", message = "this is a message"));
logger.warn(format!("about to fail"));
logger.fail(format!("failed with err_code: {code}", code = -1));


The example listed above is available here

If your test case returns UNKNOWN and you have printed at least one fail log from within the test case function, the test case is marked as FAILED. Otherwise, it is marked as PASSED.
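The resolution rule above can be sketched as a small standalone function. This is an illustration of the documented behaviour, not the crate's actual internals; the names `resolve_status` and `fail_logs` are invented for the example:

```rust
// Sketch of the status-resolution rule: an UNKNOWN result is downgraded to
// FAILED when at least one fail log was emitted, and promoted to PASSED
// otherwise. Any explicit status is kept as-is.
#[allow(non_camel_case_types)]
#[derive(Debug, PartialEq)]
enum TestCaseStatus {
    PASSED,
    FAILED,
    SKIPPED,
    UNKNOWN,
}

fn resolve_status(returned: TestCaseStatus, fail_logs: usize) -> TestCaseStatus {
    match returned {
        TestCaseStatus::UNKNOWN if fail_logs > 0 => TestCaseStatus::FAILED,
        TestCaseStatus::UNKNOWN => TestCaseStatus::PASSED,
        other => other, // explicit PASSED/FAILED/SKIPPED results are untouched
    }
}

fn main() {
    // UNKNOWN with a fail log becomes FAILED; without one it becomes PASSED.
    assert_eq!(resolve_status(TestCaseStatus::UNKNOWN, 1), TestCaseStatus::FAILED);
    assert_eq!(resolve_status(TestCaseStatus::UNKNOWN, 0), TestCaseStatus::PASSED);
    // Explicit statuses are unaffected by the log count.
    assert_eq!(resolve_status(TestCaseStatus::SKIPPED, 3), TestCaseStatus::SKIPPED);
    println!("ok");
}
```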

Author

Fadi Hanna Al-Kass

polish's People

Contributors

alkass · alshakero


polish's Issues

Running multiple THEN's per test case

I have a test suite set up as follows:

#[test]
fn tests() {
    TestRunner::new()
        .set_module_path(module_path!())
        .set_attributes(TestRunnerAttributes.disable_final_stats | TestRunnerAttributes.minimize_output)
        .set_time_unit(TestRunnerTimeUnits.microseconds)
        .run_tests(vec![
            TestCase::new("App::run()", "yields arch width", Box::new(|_logger: &mut Logger| -> TestCaseStatus {
                // GIVEN an app
                let mock_width = 42;
                let expected_result = Ok::<String, Error>(format!("Hello, {}-bit world!", mock_width));
                let mock = MockArch::new(mock_width);
                let sut = App::new(&mock);

                // WHEN the app is run
                let result = sut.run();

                // THEN the result should contain the expected architecture width
                match result == expected_result {
                    true => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                }
            })),
            TestCase::new("App::run()", "calls Info::width() once", Box::new(|_logger: &mut Logger| -> TestCaseStatus {
                // GIVEN an app
                let mock_width = 42;
                let mock = MockArch::new(mock_width);
                let sut = App::new(&mock);

                // WHEN the app is run
                let _ = sut.run();

                // THEN the app should have called Info::width() exactly once
                match mock.width_times_called.get() == 1 {
                    true => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                }
            })),
        ]);
}

Instead of repeating nearly the entire test case, I would prefer to simply add an additional THEN clause to the end of the first test case, like so:

fn tests() {
    TestRunner::new()
        .set_module_path(module_path!())
        .set_attributes(TestRunnerAttributes.disable_final_stats | TestRunnerAttributes.minimize_output)
        .set_time_unit(TestRunnerTimeUnits.microseconds)
        .run_tests(vec![
            TestCase::new("App::run()", "yields arch width", Box::new(|_logger: &mut Logger| -> TestCaseStatus {
                // GIVEN an app
                let mock_width = 42;
                let expected_result = Ok::<String, Error>(format!("Hello, {}-bit world!", mock_width));
                let mock = MockArch::new(mock_width);
                let sut = App::new(&mock);

                // WHEN the app is run
                let result = sut.run();

                // THEN the result should contain the expected architecture width
                match result == expected_result {
                    true => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                }

                // AND_THEN the app should have called Info::width() exactly once
                match mock.width_times_called.get() == 1 {
                    true => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                }
            })),
        ]);
}

Obviously, this won't compile given the current signature. A multi-assertion test case becomes more important as the complexity of the tests increases, since it keeps the amount of duplicated code to a minimum.

Your version is at 0.9, so I thought I would bring this up before you stabilize your API at 1.0, just in case it ends up being a breaking change.

Trying not to break the API, here is one idea that might work:

fn tests() {
    TestRunner::new()
        .set_module_path(module_path!())
        .set_attributes(TestRunnerAttributes.disable_final_stats | TestRunnerAttributes.minimize_output)
        .set_time_unit(TestRunnerTimeUnits.microseconds)
        .run_tests(vec![
            ResultTestCase::new("App::run()", "yields arch width", Box::new(|_logger: &mut Logger| -> Result<(), TestCaseStatus> {
                // GIVEN an app
                let mock_width = 42;
                let expected_result = Ok::<String, Error>(format!("Hello, {}-bit world!", mock_width));
                let mock = MockArch::new(mock_width);
                let sut = App::new(&mock);

                // WHEN the app is run
                let result = sut.run();

                // THEN the result should contain the expected architecture width
                test_case_assert_eq(result, expected_result)?;

                // AND_THEN the app should have called Info::width() exactly once
                test_case_assert_eq(mock.width_times_called.get(), 1)?;

                Ok(())
            })),
        ]);
}

The benefit is that the test fails at the exact line where the assertion fails. This means a developer can read the error message and know the precise issue without having to start a debugging session.
Contrast this with:

...
                let mut test_case_status = TestCaseStatus::FAILED;
                // THEN the result should contain the expected architecture width
                test_case_status = match result == expected_result {
                    true => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                };
                // AND_THEN the app should have called Info::width() exactly once
                test_case_status = match mock.width_times_called.get() == 1 {
                    true => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                };
                // AND_THEN ...
                ...

                // AND_THEN ...
                ...

                test_case_status
            })),

where a) state is required to be maintained by the developer, and b) in the event of a failure, the specific sub-test which failed is lost, necessitating c) a debug session.
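The proposed `?`-based style can be sketched in a self-contained way. This is a hypothetical illustration of the idea, not an API the crate provides; `test_case_assert_eq` and `run` are invented names:

```rust
// Sketch of the proposed assert-and-propagate helper: each check returns a
// Result, so `?` aborts the test body at the first failing assertion with no
// status variable for the developer to maintain.
#[allow(non_camel_case_types)]
#[derive(Debug, PartialEq)]
enum TestCaseStatus {
    PASSED,
    FAILED,
}

// Hypothetical helper: Ok(()) on match, Err(FAILED) otherwise.
fn test_case_assert_eq<T: PartialEq>(actual: T, expected: T) -> Result<(), TestCaseStatus> {
    if actual == expected {
        Ok(())
    } else {
        Err(TestCaseStatus::FAILED)
    }
}

// A test body in the proposed style: one assertion per THEN/AND_THEN clause.
fn run() -> Result<(), TestCaseStatus> {
    test_case_assert_eq(1 + 2, 3)?; // passes, execution continues
    test_case_assert_eq(2 + 2, 5)?; // fails, returns early here
    Ok(())
}

fn main() {
    // The second assertion trips, so the whole body yields Err(FAILED).
    assert_eq!(run(), Err(TestCaseStatus::FAILED));
    println!("ok");
}
```

Because the error propagates from the exact assertion that failed, a runner could in principle report which clause tripped, which is the pinpointing behaviour described above.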

Anyway, this is not urgent; it's just a thought I wanted to share with you. Please let me know if you have thoughts on other ways to achieve this.

Reducing test output verbosity

The output when running tests, while nicely formatted, is very verbose. Is there a way to reduce the output to just the name of each test run and its status?

For example, I have two tests defined as follows:

use super::*;
use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

#[test]
fn tests() {
    TestRunner::new().run_tests(vec![
        TestCase::new("BrakeAmt::new()",
                      "calling with no input succeeds",
                      Box::new(|_logger: &mut Logger| -> TestCaseStatus {

            // GIVEN the method under test
            let expected_result = FrictBrakeAmt(Unorm::default());
            let sut = FrictBrakeAmt::new;

            // WHEN a BrakeAmt is created
            let result = sut();

            // THEN the request should succeed, containing the expected value
            match result == expected_result {
                true  => TestCaseStatus::PASSED,
                false => TestCaseStatus::FAILED,
            }
        })),

        TestCase::new("BrakeAmt::from_unorm()",
                      "calling with a unorm value succeeds",
                      Box::new(|_logger: &mut Logger| -> TestCaseStatus {

            // GIVEN the method under test
            let test_value = 0.42;
            #[allow(result_unwrap_used)]
            let unorm = Unorm::from_f64(test_value).unwrap();
            let expected_result = FrictBrakeAmt(unorm);
            let sut = FrictBrakeAmt::from_unorm;

            // WHEN a BrakeAmt is created
            let result = sut(unorm);

            // THEN the request should succeed, containing the expected value
            match result == expected_result {
              true  => TestCaseStatus::PASSED,
              false => TestCaseStatus::FAILED,
            }
        })),
    ]);
}

They yield the following output:

running 1 test
Starting BrakeAmt::new() at 14:39:10 on 2017-12-21
Ended BrakeAmt::new() at 14:39:10 on 2017-12-21
calling with no input succeeds ... ✅
0 PASS  0 FAIL  0 WARN  0 INFO
Starting BrakeAmt::from_unorm() at 14:39:10 on 2017-12-21
Ended BrakeAmt::from_unorm() at 14:39:10 on 2017-12-21
calling with a unorm value succeeds ... ✅
0 PASS  0 FAIL  0 WARN  0 INFO

BrakeAmt::new() (calling with no input succeeds) ... 1ns
BrakeAmt::from_unorm() (calling with a unorm value succeeds) ... 1ns

Ran 2 test(s) in 2ns
2 Passed  0 Failed  0 Skipped
test types::unit_tests::tests ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

The verbosity obscures the test results. Ideally, I'd like to see just a simple namespaced list with a global summary at the bottom (across all modules and workspace crates):

✅ chal::types::unit_tests::tests::BrakeAmt::new() (calling with no input succeeds) ... 1ns
✅ chal::types::unit_tests::tests::BrakeAmt::from_unorm() (calling with a unorm value succeeds) ... 1ns

Ran 2 test(s) in 2ns...  ok
2 Passed  0 Failed  0 Skipped

Possible?
