teenytest's Introduction

teenytest

A test runner so tiny, you have to squint to see it!

If you put test scripts in test/lib, then teenytest's CLI will run them with zero public API and zero configuration. That's pretty teeny, by the sound of it!

Usage

npm i --save-dev teenytest

Using the CLI

teenytest includes a CLI, which can be run ad hoc with:

$(npm bin)/teenytest

By default, the CLI will assume your tests are in "test/lib/**/*.js" and it will search for a test helper in "test/helper.js". You can specify either or both of these by providing arguments, as well:

$(npm bin)/teenytest "test/lib/**/*.js" --helper "test/helper.js"

As an npm script

We prefer including our script in the scripts section of our package.json:

"scripts": {
  "test": "teenytest test/lib/**/*.test.js --helper test/helper.js"
}

With that configuration above, you could run all your tests with:

npm test

If you want to run a single test, you can simply tack an additional path or glob onto the end, without having to look at how teenytest is configured in the package.json:

npm test path/to/my.test.js

The above will ignore the glob embedded in the npm script and only run path/to/my.test.js.

Writing tests

Test styles

Our tests are just Node.js modules. Rather than specifying your tests via a fancy testing API, whatever your test module sets onto module.exports determines how teenytest will run the test. A module can export either a single test function or an object of (potentially nested) test functions.

Read on for examples.

Single function tests

If you export a function, that function will be run as a single test. Note that you'll get better test output if you name the function.

var assert = require('assert')

module.exports = function blueIsRed(){
  assert.equal('blue', 'red')
}

The above test will fail (since 'blue' doesn't equal 'red') with output like:

TAP version 13
1..1
not ok 1 - "blueIsRed" - test #1 in `test/lib/single-function.js`
  ---
  AssertionError: 'blue' == 'red'
    at blueIsRed (teenytest/example/simple-node/test/lib/single-function.js:4:10)
    at teenytest/index.js:47:9
    ...
    at Module._compile (module.js:434:26)
  ...

Exporting an object of test functions

If you export an object, you can include as many tests as you like. You can also implement any or all of the four supported test hooks: beforeEach, afterEach, beforeAll, and afterAll.

A file with two tests and all the hooks implemented could look like:

var assert = require('assert')

module.exports = {
  beforeAll: function() { console.log("I'll run once before both tests") },
  beforeEach: function() { console.log("I'll run twice - once before each test") },

  adds: function() { assert.equal(1 + 1, 2) },
  subtracts: function() { assert.equal(4 - 2, 2) },

  afterEach: function() { console.log("I'll run twice - once after each test") },
  afterAll: function() { console.log("I'll run once after both tests") }
}

This will output what you might expect (be warned: using console.log in your actual tests will make teenytest's output unparseable by TAP reporters):

TAP version 13
1..2
I'll run once before both tests
I'll run twice - once before each test
ok 1 - "adds" - test #1 in `test/lib/exporting-an-object.js`
I'll run twice - once after each test
I'll run twice - once before each test
ok 2 - "subtracts" - test #2 in `test/lib/exporting-an-object.js`
I'll run twice - once after each test
I'll run once after both tests

Nested tests

Nested tests are also supported: any object can contain any combination of hooks, test functions, and additional sub-test objects. This makes nested teenytest modules very similar to what's possible with "BDD"-like test libraries (the nested objects play the role of what RSpec, Jasmine, and Mocha parlance traditionally calls "example groups").

A common rationale for writing nested tests is to define one nested set of tests for each public method on a subject, for better symmetry between the test and the subject.

Let's see an example. Given this test in test/lib/dog-test.js:

var assert = require('assert')
var Dog = require('../../lib/dog')

module.exports = {
  beforeEach: function () {
    this.subject = new Dog('Sam')
  },
  bark: {
    once: function () {
      assert.deepEqual(this.subject.bark(1), ['Woof #0'])
    },
    twice: function () {
      assert.deepEqual(this.subject.bark(2), ['Woof #0', 'Woof #1'])
    }
  },
  tag: {
    frontSaysName: function () {
      assert.equal(this.subject.tag('front'), 'Hi, I am Sam')
    },
    backSaysAddress: function () {
      assert.equal(this.subject.tag('back'), 'And here is my address')
    }
  }
}

You'll get this output upon running $ teenytest test/lib/dog-test.js:

TAP version 13
1..4
ok 1 - "bark once" - test #1 in `example/simple-node/test/lib/dog-test.js`
ok 2 - "bark twice" - test #2 in `example/simple-node/test/lib/dog-test.js`
ok 3 - "tag frontSaysName" - test #3 in `example/simple-node/test/lib/dog-test.js`
ok 4 - "tag backSaysAddress" - test #4 in `example/simple-node/test/lib/dog-test.js`

Assertions

One thing you'll notice right away is that teenytest does not ship with its own assertion library. In teenytest, any test that throws an error will trigger a test failure. To keep things simple, the examples in teenytest use Node's built-in assert module, but keep in mind that its own documentation cautions that it isn't intended to be a general-purpose assertion library.

If you like the simplicity of the built-in assert, you might want to use its port core-assert. chai is also a very popular choice.

Writing asynchronous tests

With callbacks

Any test hook or test function can also support asynchronous behavior via a callback function. To indicate that a function is asynchronous, add a callback argument to the test method.

For instance, a synchronous test could look like this:

module.exports = function() {
  require('assert').equal(1+1, 2)
}

But an asynchronous test could specify a done argument and tell teenytest that the test (or hook) is complete by invoking done().

module.exports = function(done) {
  process.nextTick(function(){
    require('assert').equal(1+1, 2)
    done()
  })
}

A test failure can be triggered by either throwing an uncaught exception (which teenytest will be listening for during each asynchronous step) or by passing an Error as the first argument to done.

With promises

If you would prefer to return a promise to manage asynchronous tests, take a look at the teenytest-promise plugin.

Test Helper & Global Hooks

In addition to defining before & after hooks on a per-test-file basis, teenytest also supports a global test helper. It will search for one in test/helper.js by default, but the location can be configured with the helperPath configuration option in the API.

An example helper might look like this:

// make global things common across each test to save on per-test setup
global.assert = require('assert')

module.exports = {
  beforeAll: function(){},
  beforeEach: function(){},
  afterEach: function(){},
  afterAll: function(){}
}

In this case, the beforeAll/afterAll hooks will run only at the beginning and end of the entire suite (whereas the same hooks exported from a single test file run before or after all the tests in that file). The beforeEach/afterEach hooks, meanwhile, will run before and after each test in the entire suite.

Advanced CLI Usage

Configuration

You can configure teenytest via CLI arguments or as properties of a teenytest object in your package.json. A full example follows:

$(npm bin)/teenytest \
  --helper test/support/helper.js \
  --timeout 3000 \
  --configurator config/teenytest.js \
  --plugin test/support/benchmark-plugin.js \
  --plugin teenytest-promise \
  "lib/**/*.test.js"

The above is equivalent to the following package.json entry:

"teenytest": {
  "testLocator": "lib/**/*.test.js",
  "helper": "test/support/helper.js",
  "asyncTimeout": 3000,
  "configurator": "config/teenytest.js",
  "plugins": [
    "test/support/benchmark-plugin.js",
    "teenytest-promise"
  ]
}

These options are available:

  • testLocator - [Default: "test/lib/**/*.js"] - one or more globs which teenytest should use to search for tests. May be a string or an array of strings
  • name - [Default: []] - one or more global name filters to be applied to all files matched by testLocator
  • helper - [Default: "test/helper.js"] - the location of your global test helper file
  • asyncTimeout - [Default: 5000] - the maximum timeout (in milliseconds) for any given test in your suite
  • configurator - [Default: undefined] - a require-able path which exports a function with parameters (teenytest, cb). Configurator files may be used to run custom code just before the test runner executes the suite and to register or unregister plugins with the functions provided by teenytest.plugins; they must invoke the provided callback
  • plugins - [Default: []] - an array of require-able paths which export either teenytest plugin objects or no-arg functions that return plugin objects

Specifying which test files to run

If you'd like to run tests from specific files, you can do that by passing testLocator as an unnamed option on the command line.

teenytest test/foo-test.js

Multiple path/glob options can be passed for testLocator. The following will run all tests in test/single-foo-test.js as well as any test file matching the glob pattern test/*-bar-test.js.

teenytest test/single-foo-test.js test/*-bar-test.js

Filtering which tests are run

If you'd like to just run one test from a file, you can do that, too!

Locating by name

If you have a test in test/foo-test.js and it exports an object with functions bar and baz, you could tell teenytest to just run baz with:

teenytest test/foo-test.js#baz

The # character will split the glob on the left from the name on the right.

This can even be used across multiple test files with a wildcard glob, allowing you to slice a CI build by a particular concern. For instance, so long as the relevant tests share a name, you could run every audit log test across your project's modules at once (e.g. teenytest test/**/*.js#audit), without having to split that concern into its own set of files or directories.

Locating by line number

Suppose you have a test in test/bar-test.js and you want to run the test on line 14 (whether that's the line number where the function is declared, or just some line inside the exported test function). You can run just that test with:

teenytest test/bar-test.js:14

Locating with multiple names or line numbers

Each testLocator option can include one name or line number filter suffix. The same glob may be passed multiple times with different suffixes to locate tests matching more than one filter:

teenytest \
  test/foo-test.js#red \
  test/foo-test.js#blue \
  test/bar-test.js:14 \
  test/bar-test.js:28

The above will run tests named red and blue in the file test/foo-test.js and tests on lines 14 and 28 in the file test/bar-test.js.

Locating with the --name option

The --name option may be used to specify a global name filter that will be applied to every testLocator in addition to any filter suffixes provided. The following two commands would result in identical test runs:

teenytest \
  --name=red \
  test/foo.test.js \
  test/bar.test.js#blue \
  test/baz.test.js:14
teenytest \
  test/foo.test.js \
  test/foo.test.js#red \
  test/bar.test.js#blue \
  test/bar.test.js#red \
  test/baz.test.js:14 \
  test/baz.test.js#red

--name may be used multiple times to specify more than one global name filter:

teenytest --name=red --name=blue test/foo.test.js

Setting a timeout

By default, teenytest will allow 5 seconds for tests with asynchronous hooks or test functions to run before failing the test with a timeout error. To change this setting, set the --timeout flag in milliseconds:

teenytest --timeout 10000

The above will set the timeout to 10 seconds.

Reporting

teenytest's output is TAP13-compliant, so its output can be reported on and aggregated with numerous supported continuous integration & reporting tools.

Coverage with istanbul

If you're looking for code coverage, we recommend using istanbul's CLI. To get started, install istanbul locally:

npm i --save-dev istanbul

Suppose you're currently running your teeny tests with:

$(npm bin)/teenytest "lib/**/*.test.js" --helper "test/unit-helper.js"

You can now generate a coverage report for the same test run with:

$(npm bin)/istanbul cover node_modules/teenytest/bin/teenytest -- "lib/**/*.test.js" --helper "test/unit-helper.js"

Note the use of -- before the arguments intended for teenytest itself, which istanbul will forward along.

You could also set up both as npm scripts, so that you can run either npm test or npm run test:cover, by specifying them in your package.json:

"scripts": {
  "test": "teenytest \"lib/**/*.test.js\" --helper test/unit-helper.js",
  "test:cover": "istanbul cover teenytest -- \"lib/**/*.test.js\" --helper test/unit-helper.js"
}

Other good stuff

Building teenytest plugins

Most of the runtime behavior in teenytest is implemented as plugins that wrap the functions, tests, and suites defined by the user. You can register your own plugin like this:

teenytest.plugins.register({
  name: 'pending',
  interceptors: {
    test: function (runTest, metadata, cb) {
      runTest(function pendingTest(er, results) {
        if (_.startsWith(metadata.name, 'pending') && results.passing) {
          metadata.triggerFailure(new Error('Pending should not pass!'))
        }
        cb(er)
      })
    }
  }
})

The above plugin will fail any tests whose name starts with "pending" but that actually passed. There are several types of plugins, but all of them follow the same theme of wrapping the users' own defined functions and (often nested) suites.

There are two things to keep in mind when designing a plugin: wrapper scopes and lifecycle events.

Plugin wrapper scopes

There are three scopes of specificity each plugin can attach to: userFunction, test, and suite.

userFunction wrappers

A userFunction could be a hook like beforeAll or afterEach or an actual test function. If your plugin should augment or observe the actual behavior of the functions a user defines in their test listings, then you want to define a userFunction plugin.

For example, the plugin below might be a starting point for adding promise support to teenytest:

module.exports = {
  name: 'teenytest-promise',
  translators: {
    userFunction: function (runUserFunction, metadata, cb) {
      runUserFunction(function (er, result) {
        if (typeof result.value === 'object' &&
            typeof result.value['then'] === 'function') {
          result.value.then(
            function promiseFulfilled (value) {
              cb(er, value)
            },
            function promiseRejected (reason) {
              cb(reason, null)
            }
          )
        } else {
          cb(er)
        }
      })
    }
  }
}

(The above is also the actual source listing of v1.0.0 of the teenytest-promise module.)

test wrappers

Not to be confused with a test function, a test wrapper scope encompasses a test function plus all its hooks. If your plugin is concerned with each test's results, you probably want a test-scoped wrapper.

An example is teenytest's built-in timeout plugin, which guards against tests that take too long:

var timeoutInMs = 1000
teenytest.plugins.register({
  name: 'teenytest-timeout',
  supervisors: {
    test: function (runTest, metadata, cb) {
      var timedOut = false
      var timer = setTimeout(function outtaTime () {
        timedOut = true
        cb(new Error('Test timed out! (timeout: ' + timeoutInMs + 'ms)'))
      }, timeoutInMs)

      runTest(function timerWrappedCallback (er) {
        if (!timedOut) {
          clearTimeout(timer)
          cb(er)
        }
      })
    }
  }
})

suite wrappers

Finally, plugins can also wrap the execution of entire suites of tests using the suite scope. This scope is most often necessary when your plugin wants to comprehend the overall test suite as a tree, and wants to visit each of the suites as nodes on the tree.

This is certainly the least-used scoping, and is most likely to be needed by plugins that gather test results or report on them.

Plugin lifecycle events

The example above defines its wrapper under interceptors, because it needs to run after results have been initially determined but before the results have been logged to the console. Below are the available events to hook into:

translators

Wrapper functions defined under a plugin's translators property will run first, which should enable the author to augment the behavior of the test itself. For instance, one of the first plugins teenytest runs converts all of the user's functions to a consistent async callback API, regardless of whether the user function was asynchronous or not.
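The idea behind that first translator can be sketched as a standalone function (an illustration of the concept, not teenytest's actual source):

```javascript
// Sketch: normalize sync and callback-style user functions to one async
// callback signature, based on whether the user declared a `done` parameter
function callbackify (userFn) {
  return function (cb) {
    if (userFn.length > 0) {
      userFn(cb) // user declared a callback param: already async
    } else {
      try {
        var value = userFn() // synchronous: run it and report the outcome
        cb(null, value)
      } catch (er) {
        cb(er)
      }
    }
  }
}
```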

supervisors

Wrapper functions that want to short-circuit or affect the failure/passing status of a test are implemented under a plugin's supervisors key. Two examples of this built into teenytest are a plugin that enforces a timeout for each test and another that catches uncaught exceptions (i.e. when the user throws an error instead of passing it to the callback function).

analyzers

Wrapper functions that compute results are defined under the analyzers key of a plugin. Teenytest ships with a built-in results plugin & store that is probably fine for most purposes, but if you want to determine the results of your tests some other way, you would define your own analyzers wrappers.

It's important to note that prior to the analyzers lifecycle event, all callbacks pass any test failure as an initial error argument, but—because the built-in results plugin can ensure recorded results are passed to subsequent plugin wrappers' callbacks—any errors up to this point will be swallowed and replaced with null. If a subsequent plugin wrapper passes an error to its own callback function, it will be interpreted by teenytest as a fatal error, aborting the test run.

interceptors

Sometimes a plugin that plays a supervisory role actually requires knowledge of a test's results in order to determine whether a failure occurred. A classic example of this (and perhaps the only use case) is a "pending test" feature, where tests flagged as works-in-progress or "pending" should fail (because they've been marked by the user as unfinished). As a result, a pending test interceptor might trigger a failure for any pending test that passes (perhaps indicating to the user that they need to write a failing test or unflag the test as no longer pending).

reporters

Reporter wrappers come after all the other plugins, using the provided results callback to write results. By default, teenytest writes out TAP13 to standard out, but a custom reporter could format results any way it likes.

Invoking teenytest via the API

While it'd be unusual to need it, if you require('teenytest'), its exported function looks like:

teenytest(globOfTestPaths, [options], callback)

The function takes a glob pattern describing where your tests are located and an options object with a few simple settings. If your tests pass, the callback's second argument will be true. If your tests fail, it will be false.

Here's an example test script with every option set and a comment on the defaults:

#!/usr/bin/env node

var teenytest = require('teenytest')

teenytest('test/lib/**/*.js', {
  helperPath: 'test/helper.js', // module that exports test hook functions (default: null)
  output: console.log, // output for writing results
  cwd: process.cwd(), // base path for test globs & helper path,
  asyncTimeout: 5000 // milliseconds to wait before triggering failure of async tests & hooks
}, function(er, passing) {
  process.exit(!er && passing ? 0 : 1)
})

As you can see, the above script will bail with a non-zero exit code if the tests don't pass or if a fatal error occurs.

While the API is asynchronous, both synchronous and asynchronous tests are supported.

teenytest's People

Contributors

agent-0028, cpruitt, davemo, dependabot[bot], giltayar, hanneskaeufler, jasonkarns, jkrems, joshtgreenwood, kscoulter, neall, rosston, searls, webstech


teenytest's Issues

Line numbers don't respect sourcemaps

When running teenytest file.js:10, the "10" should refer to the line number in whatever the original source was. If it's been compiled from TypeScript etc., that line number will be different. Teenytest should use source maps to trace back to the original line.

Likewise for exceptions; they point to line numbers in the built code, not the original source.

Glob missing files with hyphens

I'm not sure if this is an issue with teenytest or the underlying globbing library and I'm not in a position to easily troubleshoot the root cause so I thought I would start with an issue here.

Issue

Using a glob expression like test/**/*.test.js did not find and run a file that included a hyphen (-) in the name. It did run a test file with a name that was identical except with an underscore instead of a hyphen.

Example

I've created an example at https://github.com/amiel/teenytest-glob-issue. Please see the README.md there for instructions and descriptions of expected and actual behavior.

Large assertions are truncated

If you assert very large objects against each other, the stringified versions shown at the terminal are truncated. So there's no way to actually see what was being asserted other than instrumenting the code with console.log etc.

Here's an example with all identifiers anonymized. Search it for "..." to see where the truncation is happening. The actual difference between these two values isn't visible in this error message; it's somewhere after the "...".

# Failures:
#
#   1 - "default integration testname" - test #1 in `build/myfile/myfile.test.js`
#
#     AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal:
#
#     {
#       cs: [
#         {
#           ls: [
#             {
#               as: [],
#               ds: [
#                 'd1',
#                 'd2'
#               ],
#               name: 'l1',
#               ss: [
#                 {
#                   as: [
#                     'r'
#                   ],
#                   i: '1 + 1',
#                   kind: 'c',
#                   o: [
#                     '2'
#                   ]
#                 },
#                 {
#                   as: [],
#                   i: '2',
#                   kind: 'r...
#
#     should equal
#
#     {
#       cs: [
#         {
#           ls: [
#             {
#               as: [],
#               ds: [
#                 'd1',
#                 'd2'
#               ],
#               name: 'l1',
#               ss: [
#                 {
#                   as: [
#                     'r'
#                   ],
#                   i: '1 + 1',
#                   kind: 'c',
#                   o: [
#                     '2'
#                   ]
#                 },
#                 {
#                   as: [],
#                   i: '2',
#                   kind: 'r...
#         at Object.testname (/Users/grb/proj/myproj/build/myfile/myfile.test.js:9:20)
#         at /Users/grb/proj/myproj/node_modules/teenytest/lib/plugins/callbackify.js:14:21
#         at runX (/Users/grb/proj/myproj/node_modules/teenytest/lib/plugins/wrap.js:22:7)
#         at Object.userFunction [as wrap] (/Users/grb/proj/myproj/node_modules/teenytest/plugins/uncaught-exception.js:16:9)
#         at callable (/Users/grb/proj/myproj/node_modules/teenytest/lib/plugins/wrap.js:29:24)
#         at runX (/Users/grb/proj/myproj/node_modules/teenytest/lib/plugins/wrap.js:22:7)
#         at Object.userFunction [as wrap] (/Users/grb/proj/myproj/node_modules/teenytest/plugins/results.js:10:9)
#         at callable (/Users/grb/proj/myproj/node_modules/teenytest/lib/plugins/wrap.js:29:24)
#         at runX (/Users/grb/proj/myproj/node_modules/teenytest/lib/plugins/wrap.js:22:7)
#         at Object.userFunction [as wrap] (/Users/grb/proj/myproj/node_modules/teenytest/plugins/tap13/index.js:16:9)

Bind all hooks and each test to a context object

A context object (this) should be created and bound to each beforeEach, afterEach and test method, but not to beforeAll or afterAll (since it should be a fresh context object for each test)
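A minimal sketch of the requested behavior, with all names hypothetical:

```javascript
// Hypothetical sketch: beforeEach, the test, and afterEach share one
// fresh `this` per test; beforeAll/afterAll would get no context at all
function runSingleTest (testModule, testName) {
  var context = {} // a brand-new context object for each test
  if (testModule.beforeEach) testModule.beforeEach.call(context)
  testModule[testName].call(context)
  if (testModule.afterEach) testModule.afterEach.call(context)
}
```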

Get the build onto windows

I mistakenly thought this repo had Appveyor set up already (like scripty does).

Things teenytest should do:

  • work on windows
  • its full build should work on windows, too

Trimming the error message from the stack trace.

Hi!

I've been wanting to write a simple test runner for myself for a long time, and have struggled hard trying not to go down that road. It seems that this project is very close to ticking all the boxes on my test-runner wishlist, so I'm really excited about trying it out.

One thing that I have noticed in the very brief period I have been trying it out, is the duplication of error messages in the output for failed tests. Your example in the readme shows it nicely:

TAP version 13
1..1
not ok 1 - "blueIsRed" - test #1 in `test/lib/single-function.js`
  ---
  message: 'blue' == 'red'
  stacktrace: AssertionError: 'blue' == 'red'
    at blueIsRed (teenytest/example/test/lib/single-function.js:4:10)
    at teenytest/index.js:47:9
    ...
    at Module._compile (module.js:434:26)
  ...

That works okay and is not really that annoying for assertions with only a single line of output. It gets worse when you use a wordier assertion framework (unexpected in my case) - given this test case:

const expect = require('unexpected');

exports.fooTest = () => expect({ foo: 'bar' }, 'to equal', { foo: 'baz' });

I get the following output (with pretty colors of course ;-)):

TAP version 13
1..1
not ok 1 - "fooTest" - test #1 in `test/lib/unexpected.js`
  ---
  message:
expected { foo: 'bar' } to equal { foo: 'baz' }

{
  foo: 'bar' // should equal 'baz'
             //
             // bar
             // baz
}

  stacktrace: UnexpectedError:
expected { foo: 'bar' } to equal { foo: 'baz' }

{
  foo: 'bar' // should equal 'baz'
             //
             // bar
             // baz
}

    at exports.fooTest (/.../test/lib/unexpected.js:3:25)
    at /.../node_modules/teenytest/lib/plugins/callbackify.js:15:17
    at runX (/.../node_modules/teenytest/lib/plugins/wrap.js:22:7)
    at Object.supervisors.userFunction [as wrap] (/.../node_modules/teenytest/plugins/uncaught-exception.js:16:9)
    at _.assign.callable (/.../node_modules/teenytest/lib/plugins/wrap.js:29:24)
    at runX (/.../node_modules/teenytest/lib/plugins/wrap.js:22:7)
    at Object.analyzers.userFunction [as wrap] (/.../node_modules/teenytest/plugins/results.js:10:9)
    at _.assign.callable (/.../node_modules/teenytest/lib/plugins/wrap.js:29:24)
    at runX (/.../node_modules/teenytest/lib/plugins/wrap.js:22:7)
    at Object.reporters.userFunction [as wrap] (/.../node_modules/teenytest/plugins/tap13.js:14:9)
    at _.assign.callable (/.../node_modules/teenytest/lib/plugins/wrap.js:29:24)
    at /.../node_modules/async/lib/async.js:718:13
    at Immediate.iterate (/.../node_modules/async/lib/async.js:262:13)
    set UNEXPECTED_FULL_TRACE=true to see the full stack trace
  ...

What I would like is best expressed as:

err.stacktrace = err.stacktrace.replace(err.message, '');
// or, to be safer, only strip it from the beginning
err.stacktrace = err.stacktrace.replace(new RegExp('^' + err.message), '');

This would leave a leading linebreak and still make the stacktrace lineup nicely.

Support tests written as modules

Right now teenytest uses synchronous require to load test files. This doesn't work for .mjs (ESM) since module loading/execution is asynchronous. What mocha does is to require first, catch ERR_REQUIRE_ESM, and then attempt import of the file.
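The mocha-style fallback described above could be sketched like this (the function name is hypothetical; ERR_REQUIRE_ESM is the code Node actually throws):

```javascript
// Sketch of the require-first, import-on-ERR_REQUIRE_ESM fallback.
// Always returns a promise so callers handle CJS and ESM uniformly.
function loadTestModule (path) {
  try {
    return Promise.resolve(require(path))
  } catch (er) {
    if (er.code === 'ERR_REQUIRE_ESM') {
      return import(path) // dynamic import handles .mjs / ESM files
    }
    return Promise.reject(er)
  }
}
```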

Test file:

// example.test.mjs
import assert from 'assert';

class Dog {
  bark(length) {
    return Array.from({ length }, (_, idx) => `Woof #${idx}`);
  }
}

export function beforeEach() {
  this.subject = new Dog('Sam');
}

export const bark = {
  once() {
    assert.deepEqual(this.subject.bark(1), ['Woof #0']);
  },
  twice() {
    assert.deepEqual(this.subject.bark(2), ['Woof #0', 'Woof #1']);
  },
};

export const tag = {
  frontSaysName() {
    assert.equal(this.subject.tag('front'), 'Hi, I am Sam');
  },
  backSaysAddress() {
    assert.equal(this.subject.tag('back'), 'And here is my address');
  },
};

Result:

$ teenytest examples/teenytest/test/example.test.mjs
internal/modules/cjs/loader.js:998
    throw new ERR_REQUIRE_ESM(filename);
    ^

Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /path/to/examples/teenytest/test/example.test.mjs
    at Module.load (internal/modules/cjs/loader.js:998:11)
    at Function.Module._load (internal/modules/cjs/loader.js:899:14)
    at Module.require (internal/modules/cjs/loader.js:1040:19)
    at require (internal/modules/cjs/helpers.js:72:18)
    at /path/to/node_modules/teenytest/lib/prepare/modules/load.js:11:22
    at arrayMap (/path/to/node_modules/lodash/lodash.js:639:23)
    at Function.map (/path/to/node_modules/lodash/lodash.js:9554:14)
    at module.exports (/path/to/node_modules/teenytest/lib/prepare/modules/load.js:9:12)
    at /path/to/node_modules/teenytest/lib/prepare/modules/index.js:9:7
    at arrayMap (/path/to/node_modules/lodash/lodash.js:639:23) {
  code: 'ERR_REQUIRE_ESM'
}

Run test by line #

Just like #4 but by line number

teenytest my-test.js:2

with my-test.js:

module.exports = {
  hereItIs: function () {},
  notHere: function () {}
}

to run only the test defined in that line (declaration or function body)

`done` could wrap callbacks for better failure messages

I have a problem with handling errors in async tests. Here's an example:

  module.exports = function (done) {
    curl('https://google.com', function (er, data) {
      assert.equal(data, 'lalala')
      done(er)
    })
  }

If the above callback is passed a truthy er, then done(er) will fail and print the error. Yay!

But… done will never be called and the er never printed 99% of the time, because the assert.equal call will raise first, probably with an unhelpful undefined !== 'lalala' failure that hides the real cause.

So what if teenytest's done callback API checked for errors first, then proxied the callback function containing the assertions? That way the test could fail fast whenever er is set.

    curl('https://google.com', done.when(function (er, data) {
      assert.equal(data, 'lalala')
    }))

And in cases where a truthy er is expected, triggering failure when er is falsey:

    curl('https://google.com', done.whenErrors(function (er, data) {
      assert.equal(data, 'lalala')
    }))

Bonus: it'd save a line, and I'm always about saving a line.
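A hypothetical implementation of the proposed decorator is tiny (a sketch only; neither this when function nor done.when exists in teenytest today):

```javascript
// Sketch of the proposed done.when: fail fast on a truthy er, otherwise
// run the assertions and report any thrown assertion error via done
function when (done, assertions) {
  return function (er) {
    if (er) return done(er) // fail fast: the real cause isn't hidden
    try {
      assertions.apply(null, arguments)
      done()
    } catch (assertionError) {
      done(assertionError)
    }
  }
}
```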

Async detection through promises

I just stumbled across this repo and haven't played with it yet, but it looks pretty cool. One idea I had upon reading the README: instead of requiring a callback parameter for async tests, promise detection would be a convenient way to identify an async test, perhaps using a package like is-promise.

make the code user friendly?

Cool stuff, bro! And fast!!! But what scares me to hell is the tons of extra code you need to add to do a simple test! What about making it user-friendly, bro?

  • Instead of forcing closures on the end user, simply return function () {} and do the heavy lifting out of sight for the end-dev, as in Jasmine, Mocha, etc.
  • Improved logging (see Mocha). Maybe you could add a reporter option?
  • In the source: test/helper. Simplify these files so they are not needed? Let the CLI handle this kind of thing? Should it be necessary to call the teenytest main function from here again when the CLI already does it? See helper.js.

logger-factory.js: can't this be baked into the source? Let the asserts be a simple plugin that includes assert.js, with plugin options so others can create their own assert library if needed?

I think, bro, all these things scare people off. I guess your intention is that end-devs, not nerds, should use your absolutely great code?

What about debugging bro?

Other cool stuff to consider...

  • a timestamp when tests start; file size maybe, and the time consumed for each test, file, or module.
  • support for testing files other than .js, e.g. CoffeeScript, TypeScript (a hot topic these days), LiveScript, Dart, etc.

Damn cool, bro, if only I could run the tests directly without extra code!!!

And cool cool stuff bro! I'm loving it!!

Run test by name

It'd be nice, when a single test is wanted, to be able to just run:

teenytest my-test.js#hereItIs

with my-test.js:

module.exports = {
  hereItIs: function () {},
  notHere: function () {}
}

to run only that test and not the others.
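A sketch of the selection logic this would need. The selectTest helper and the fake require are illustrative names, not teenytest's implementation: split the argument on # and keep only the named export.

```javascript
// Sketch: parse a `file#testName` argument and narrow the module's exports
// down to the one named test.
function selectTest (arg, requireFn) {
  const [file, testName] = arg.split('#')
  const testModule = requireFn(file)
  if (!testName) return testModule // no fragment: run everything as before
  return { [testName]: testModule[testName] }
}

// Usage with a stand-in for require():
const fakeRequire = () => ({
  hereItIs: function () {},
  notHere: function () {}
})
const selected = selectTest('my-test.js#hereItIs', fakeRequire)
console.log(Object.keys(selected)) // only ['hereItIs']
```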

Add --help output and man docs.

I frequently forget the CLI options (specifically for filtering) that teenytest supports.

The immediate next step is either teenytest --help or man teenytest, neither of which is currently supported:

$(npm bin)/teenytest --help
TAP version 13
1..0
# Test run passed!
#   Passed: 0
#   Failed: 0
#   Total:  0

NOT Windows compatible

Hey bro! A new issue! I switched to Windows today! Here are my findings, so I quit this code!
Sorry bro! Not so awesome code anymore :(

Running test in "test/nested-select-by-test-name-test.js"
Running test in "test/nested-test.js"
Running test in "test/plugin-configuration-test.js"
Uncaught error:
98 == 42
AssertionError: 98 == 42

Uncaught error:
spawn npm ENOENT
Error: spawn npm ENOENT
    at exports._errnoException (util.js:1008:11)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:182:32)
    at onErrorNT (internal/child_process.js:348:16)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickCallback (internal/process/next_tick.js:98:9)
Uncaught error:
spawn npm ENOENT
Error: spawn npm ENOENT
    at exports._errnoException (util.js:1008:11)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:182:32)
    at onErrorNT (internal/child_process.js:348:16)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickCallback (internal/process/next_tick.js:98:9)
Uncaught error:
spawn npm ENOENT
Error: spawn npm ENOENT
    at exports._errnoException (util.js:1008:11)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:182:32)
    at onErrorNT (internal/child_process.js:348:16)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickCallback (internal/process/next_tick.js:98:9)
Uncaught error:
spawn npm ENOENT
Error: spawn npm ENOENT
    at exports._errnoException (util.js:1008:11)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:182:32)
    at onErrorNT (internal/child_process.js:348:16)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickCallback (internal/process/next_tick.js:98:9)
You fail!

How did I trigger this, bro?
This isn't valid on Windows:

"test:unit": "./test/support/runner.js",

so I tried with

"test:unit": "node test/support/runner.js",

and

"test:unit": "node ./test/support/runner.js",

Nah! The more I test your code, bro, the more my feelings are crushed :( :( :(

Errors in your async implementation

Hey, dude! Here are some nasty bugs!

build-test-actions.js: you are passing an argument like {}[] (read: an object vs. an array). In the async dependency, the Dictionary function's accepted argument is Dictionary<AsyncFunction<{}>>.
You see the difference, bro? This is one of the things causing issues on Windows and making your code fail.
The same issue is found in this file, bro: build-test-actions.js, 2 times here.

In this build-test-actions file you are trying to use reverse, but the latest Lodash doesn't have that.
This async issue occurs many times, bro :(
I bug-tracked this for you now, bro! I hope you fix it, because I still enjoy your awesome code!!

I also investigated the tons of extra code you are using as helper and runner in the test folder, bro! If you refactor, you can move this code to the lib folder.
And calling store.js from the test folder? Isn't that an internal function, bro? Otherwise you need to document it! Better if you do a refactor here, bro!

Is it possible to do a refactor so you can simply do:

teenytest('**/*.js', {}, function () {})

and if async, bro, the function could take one argument: done.

Then your internal function could return what you see here, bro:

https://github.com/testdouble/teenytest/blob/master/test/support/helper.js#L16

Then some of this arguments could be sent to a reporter base function that would output a log after calling a plugin - TAP13 or another reporter.

Ideally, bro, look at Tape: the factory returns an object, like 't' in tape.
The result would then be this, bro:

teenytest('**/*.js', {}, function (t) {
   t.equal('red')
})

where 't' in this case is your 'blue'. This, bro, would allow assert-library plugins, and behind the scenes you use the Node.js native assert library by default.
So if you use async, bro, it could be like this:

teenytest('**/*.js', {}, function (t, done) {
   t.equal('red')
})

That is user-friendly, bro!! What do you think?

I probably know, bro, that you ain't going to fix anything more on this code. So I will move on to some other libs now.

For Node.js modules, bro, you can do similar, but the third arg for teenytest would be an object, not a function, bro.

Plugin registration API such that plugins are registered in the helper?

Realizing that plugins are trivially registered via the CLI or package.json... However, the best part of teenytest is its conventional defaults. As such I want my CLI invocation to just be teenytest and I don't want to have to add a teenytest stanza to my package.json.

Alternatively, I'm thinking the ability to easily register plugins within the conventional test helper would be nice? (Likely this would really just be the facility to configure anything in teenytest, not just plugins.)

require('teenytest').plugins returns the plugin store, but plugins can't be registered until a run has begun. However, invoking the main exported function kicks off the run (which has technically already begun, since we're in the helper file).

Wondering if it makes sense for the test helper to have access to the currently in-process test run? I have lots of ideas here but they're all kinda kludgy. Has anyone else thought about this at all?

possible options:

  • expose in-process test run on the main export (such that require('teenytest') during a run has access to manipulate the current configuration)
  • allow the test helper to export a function that is invoked at the beginning of the run that can manipulate the configuration
  • allow require('teenytest').plugins to reference an in-process test run. (I'm already confused as to why this is exported as it currently is, when it's really an ephemeral property instantiated during the run.)
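For illustration, the second option above could look something like this. The loadHelper function and the config shape are purely hypothetical, just to make the idea concrete:

```javascript
// Sketch: if the helper's export is a function, invoke it with the run's
// configuration before any tests execute, letting it register plugins.
function loadHelper (helperExport, config) {
  if (typeof helperExport === 'function') helperExport(config)
  return config
}

// Usage: what a test/helper.js might export under this hypothetical API.
const helperExport = function (config) {
  config.plugins.push('my-fake-plugin')
}
const config = loadHelper(helperExport, { plugins: [] })
console.log(config.plugins) // ['my-fake-plugin']
```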

Support human-readable output

Right now, test run output is in TAP format, with a second human-readable section at the bottom. Human/TAP output should be toggleable, probably with human-readable as the default and TAP behind a command-line flag.

run a single test even if a glob is provided

Often I have a script config like:

    "test": "teenytest --helper support/helper.js \"smells/**/*.js\" ",

It'd be nice if I could run npm test -- path/to/test.js to run just that test.

allow for nested tests

object-exporting tests should allow nesting. Given a test:

module.exports = {
  beforeAll: function () { console.log('A') },
  beforeEach: function () { console.log('B') },
  test1: function () { console.log('C') },
  sub: {
    beforeAll: function () { console.log('D') },
    beforeEach: function () { console.log('E') },
    test2: function () { console.log('F') },
    test3: function () { console.log('G') },
    afterEach: function () { console.log('H') },
    afterAll: function () { console.log('I') }
  },
  afterEach: function () { console.log('J') },
  afterAll: function () { console.log('K') }
}

If we ran this, we should see some output like:

A
B
C
J
D
B
E
F
H
J
B
E
G
H
J
I
K
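That ordering can be reproduced with a small recursive runner. This is only a sketch of the desired semantics, not teenytest's code: beforeEach hooks run from the outermost module inward, afterEach hooks from the innermost outward.

```javascript
// Sketch of nested hook ordering: for each test, run every ancestor
// beforeEach outermost-first, the test, then every afterEach innermost-first.
const output = []
const say = (letter) => () => output.push(letter)
const HOOKS = ['beforeAll', 'beforeEach', 'afterEach', 'afterAll']

function run (suite, ancestors = []) {
  const chain = ancestors.concat([suite])
  if (suite.beforeAll) suite.beforeAll()
  for (const [name, value] of Object.entries(suite)) {
    if (HOOKS.includes(name)) continue
    if (typeof value === 'function') {
      chain.forEach((s) => s.beforeEach && s.beforeEach())
      value()
      chain.slice().reverse().forEach((s) => s.afterEach && s.afterEach())
    } else {
      run(value, chain) // nested sub-suite
    }
  }
  if (suite.afterAll) suite.afterAll()
}

run({
  beforeAll: say('A'), beforeEach: say('B'),
  test1: say('C'),
  sub: {
    beforeAll: say('D'), beforeEach: say('E'),
    test2: say('F'), test3: say('G'),
    afterEach: say('H'), afterAll: say('I')
  },
  afterEach: say('J'), afterAll: say('K')
})

console.log(output.join('')) // ABCJDBEFHJBEGHJIK
```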

Allow null hook functions

Right now, the global helper & undefined user hooks are replaced with no-op hooks, which in turn results in every plugin being invoked for a no-op function, which is a huge waste. Instead, let hooks be undefined/null and be smart enough to skip them when building test actions.
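A sketch of the skipping logic (buildActions is an illustrative name, not the module's real contract): filter out anything that isn't a function before plugins ever see it.

```javascript
// Sketch: only real functions become test actions; null/undefined hooks are
// dropped instead of being replaced with no-ops.
function buildActions (hooks) {
  return [hooks.beforeAll, hooks.beforeEach, hooks.test, hooks.afterEach, hooks.afterAll]
    .filter((fn) => typeof fn === 'function')
}

// Usage: a module with only a test and a beforeEach yields two actions, so
// plugins are invoked twice rather than five times.
console.log(buildActions({ beforeEach: () => {}, test: () => {} }).length) // 2
```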

ci: test:unit is not testing with the local teenytest

While debugging a teenytest CI issue with globbing, it appeared that running test:unit was installing teenytest from npm rather than using the local copy with its changes.

To get around this, an npm pack can be done and that file used for the test:unit install. Here is a sample implementation that works with bash and Windows cmd.

The javascript provides platform independence. It could be moved to a separate file.

If this is of interest, I will open a pull request. This also allows validation of the package before publishing.

ignored glob option (e.g. in node_modules)

just had an unfortunate experience where my test glob "**/*.test.js" caught one of my dependencies' tests in node_modules. Guess we need ignore-pattern support that ignores anything in node_modules by default?

Blech.
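The filtering itself is simple enough to sketch (applyIgnores and the default pattern are illustrative, not an existing teenytest option):

```javascript
// Sketch: drop matched files that live under any ignored directory, with
// node_modules ignored by default.
function applyIgnores (files, ignores = [/(^|\/)node_modules\//]) {
  return files.filter((file) => !ignores.some((pattern) => pattern.test(file)))
}

// Usage: the dependency's test is filtered out before the run begins.
console.log(applyIgnores([
  'test/lib/thing.test.js',
  'node_modules/dep/test/dep.test.js'
])) // ['test/lib/thing.test.js']
```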

refactor all the sync calls

build-test-modules has an fs.readFileSync and a glob.sync call because it was originally refactored out of a sync function, and changing its contract will require the top-level function to be refactored.

Hook failures don't make subsequent tests 'not ok'

Given this test:

module.exports = {
  beforeEach: function () {
    throw new Error('Bad hook do not run!')
  },
  thisIsNotOk: function () {}
}

You get this output:

TAP version 13
1..1
 An error occurred in test hook: module beforeEach defined in `test/fixtures/hook-fail.js`
  ---
  message: Bad hook do not run!
  stacktrace: Error: Bad hook do not run!
    at module.exports.beforeEach (/Users/justin/code/testdouble/teenytest/test/fixtures/hook-fail.js:5:11)
    ………
  ...
ok 1 - "thisIsNotOk" - test #1 in `test/fixtures/hook-fail.js`

Which exits with code 1 (good), however the "thisIsNotOk" test should absolutely read not ok, right? Seems bizarre.

CLI should take multiple paths and not a glob pattern

When invoked via the CLI, take paths like a normal unix app would, rather than ask users to provide globs in quotes.

This will probably be a breaking change demanding a major bump, but it's the right thing to do to make the cli behave conventionally
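Conventional argv handling could be sketched like this (parsePaths is an illustrative name, and --helper is the only flag considered here):

```javascript
// Sketch: every non-flag argument becomes a test path, so users can pass
// several paths (or let the shell expand a glob) without quoting.
function parsePaths (argv) {
  const paths = []
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === '--helper') { i++; continue } // skip the flag's value too
    if (argv[i].startsWith('--')) continue
    paths.push(argv[i])
  }
  return paths
}

// Usage:
console.log(parsePaths(['test/a.js', 'test/b.js', '--helper', 'test/helper.js']))
// → ['test/a.js', 'test/b.js']
```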

Glob expansion in shell prevents running all intended tests

If you run teenytest test/**/*.js in a terminal, the shell will expand that glob automatically, and the argvOptions function will pull only the first path from that list to run. This is surprising, though there's a workaround: prevent shell expansion with single quotes: teenytest 'test/**/*.js'. This allows the prepare function to expand the glob as intended.
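The difference is easy to demonstrate in a shell (`set --` stands in for teenytest's argv here; nothing about teenytest beyond reading argv is assumed):

```shell
# Unquoted, the shell expands the glob into multiple arguments; quoted, the
# program receives one literal pattern it can expand itself.
dir=$(mktemp -d)
touch "$dir/a.test.js" "$dir/b.test.js"
set -- $dir/*.test.js;   echo "$#"   # shell-expanded: 2 arguments
set -- "$dir/*.test.js"; echo "$#"   # quoted: 1 literal glob argument
```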

No support for --bail

I'd like to add support for a --bail CLI parameter, or a plugin that does that, or something in my helper. Basically, it should stop testing the moment a test fails. I've read through the documentation on writing plugins, but it isn't immediately obvious to me how I would implement that. Any tips?
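Whatever the plugin hook ends up being, the bail semantics themselves are just this (runWithBail is a generic sketch, not teenytest's plugin API):

```javascript
// Sketch: run tests in order and stop at the first failure.
function runWithBail (tests) {
  const results = []
  for (const test of tests) {
    try {
      test()
      results.push('ok')
    } catch (er) {
      results.push('not ok')
      break // bail: later tests never run
    }
  }
  return results
}

// Usage: the third test is never executed.
console.log(runWithBail([
  () => {},
  () => { throw new Error('boom') },
  () => {}
])) // → ['ok', 'not ok']
```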
