
reassure's Introduction

Reassure

Performance testing companion for React and React Native.

Callstack x Entain

Read The Docs


The problem

You want your React Native app to perform well and fast at all times. As a part of this goal, you profile the app, observe render patterns, apply memoization in the right places, etc. But it's all manual and too easy to unintentionally introduce performance regressions that would only get caught during QA or worse, by your users.

This solution

Reassure allows you to automate React Native app performance regression testing on CI or a local machine. In the same way that you write integration and unit tests to automatically verify that your app is still working correctly, you can write performance tests that verify that your app is still performing well.

You can think about it as a React performance testing library. In fact, Reassure is designed to reuse as much of your React Native Testing Library tests and setup as possible.

Reassure works by measuring the render characteristics – duration and count – of the testing scenario you provide and comparing them to measurements from the stable version of your code. It repeats the scenario multiple times to reduce the impact of random variations in render times caused by the runtime environment. Then it applies statistical analysis to determine whether the changes are statistically significant. Finally, it generates a human-readable report summarizing the results and displays it on the CI or as a comment to your pull request.

In addition to measuring component render times, Reassure can also measure the execution time of regular JavaScript functions.

Installation and setup

To install Reassure, run the following command in your app folder:

Using yarn

yarn add --dev reassure

Using npm

npm install --save-dev reassure

You will also need a working Jest setup as well as either React Native Testing Library or React Testing Library.

You can check our example projects for reference.

Reassure will try to detect which Testing Library you have installed. If both React Native Testing Library and React Testing Library are present, it will warn you and give precedence to React Native Testing Library. You can explicitly specify which Testing Library to use with the configure option:

configure({ testingLibrary: 'react-native' });
// or
configure({ testingLibrary: 'react' });

You should set it in your Jest setup file, and you can override it in particular test files if needed.
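
For example, a minimal sketch of such a setup file (the jest.setup.ts file name is an assumption; use whatever file your Jest setupFilesAfterEnv option points to):

// jest.setup.ts
import { configure } from 'reassure';

configure({ testingLibrary: 'react-native' });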

Writing your first test

Now that the library is installed, you can write your first test scenario in a file with .perf-test.js/.perf-test.tsx extension:

// ComponentUnderTest.perf-test.tsx
import { measurePerformance } from 'reassure';
import { ComponentUnderTest } from './ComponentUnderTest';

test('Simple test', async () => {
  await measurePerformance(<ComponentUnderTest />);
});

This test will measure render times of ComponentUnderTest during mounting and resulting sync effects.

Note: Reassure will automatically match test filenames using Jest's --testMatch option with the value "<rootDir>/**/*.perf-test.[jt]s?(x)". However, if you want to use a different pattern, you may pass your own --testMatch glob to the reassure measure script. More about --testMatch in the Jest docs.

Writing async tests

If your component contains any async logic or you want to test some interaction, you should pass the scenario option:

import { measurePerformance } from 'reassure';
import { screen, fireEvent } from '@testing-library/react-native';
import { ComponentUnderTest } from './ComponentUnderTest';

test('Test with scenario', async () => {
  const scenario = async () => {
    fireEvent.press(screen.getByText('Go'));
    await screen.findByText('Done');
  };

  await measurePerformance(<ComponentUnderTest />, { scenario });
});

The body of the scenario function uses familiar React Native Testing Library methods.

If you are using a version of React Native Testing Library lower than v10.1.0, where the screen helper is not available, the scenario function receives the rendered screen as its first argument:

import { measurePerformance } from 'reassure';
import { fireEvent } from '@testing-library/react-native';
import { ComponentUnderTest } from './ComponentUnderTest';

test('Test with scenario', async () => {
  const scenario = async (screen) => {
    fireEvent.press(screen.getByText('Go'));
    await screen.findByText('Done');
  };

  await measurePerformance(<ComponentUnderTest />, { scenario });
});

If your test contains any async changes, you will need to make sure that the scenario waits for these changes to settle, e.g. using findBy queries, waitFor, or waitForElementToBeRemoved from RNTL.
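
For instance, a sketch of a scenario waiting for an element to disappear (the 'Delete' and 'Deleting...' labels are hypothetical):

import { measurePerformance } from 'reassure';
import { screen, fireEvent, waitForElementToBeRemoved } from '@testing-library/react-native';
import { ComponentUnderTest } from './ComponentUnderTest';

test('Test with async removal', async () => {
  const scenario = async () => {
    fireEvent.press(screen.getByText('Delete'));
    await waitForElementToBeRemoved(() => screen.getByText('Deleting...'));
  };

  await measurePerformance(<ComponentUnderTest />, { scenario });
});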

For more examples, look into our example apps.

Measuring test performance

To measure your first test performance, you need to run the following command in the terminal:

yarn reassure

This command will run your tests multiple times using Jest, gathering performance statistics, and write them to the .reassure/current.perf file. To verify your setup, check that the output file exists after running the command for the first time.

Note: You can add .reassure/ folder to your .gitignore file to avoid accidentally committing your results.

Reassure CLI will automatically try to detect your source code branch name and commit hash when you are using Git. You can override these options, e.g. if you are using a different version control system:

yarn reassure --branch [branch name] --commit-hash [commit hash]

CI setup

To make setting up the CI integration and all prerequisites more convenient, we have prepared a CLI command to generate all necessary templates for you to start with.

Simply run:

yarn reassure init

This will generate the following file structure:

├── <ROOT>
│   ├── reassure-tests.sh
│   ├── dangerfile.ts/js (or dangerfile.reassure.ts/js if dangerfile.ts/js already present)
│   └── .gitignore

Options

You can also use the following options to adjust the script further:

--verbose (optional)

This is one of the options controlling the level of logs printed to the command prompt while running Reassure scripts. It will print all logs, including additional information useful for debugging.

--silent (optional)

Like --verbose, this option also controls the level of logs. It will suppress all logs besides explicit errors.

Scaffolding

CI Script (reassure-tests.sh)

Basic script allowing you to run Reassure on CI. More on the importance and structure of this file in the following section.

Dangerfile

If your project already contains a dangerfile.ts/js, the CLI will not override it in any way. Instead, it will generate a dangerfile.reassure.ts/js file, allowing you to compare and update your own at your convenience.

.gitignore

If a .gitignore file is present and contains no mention of reassure, the script will append the .reassure/ directory to its end.

CI script (reassure-tests.sh)

To detect performance changes, you must measure the performance of two versions of your code: current (your modified code) and baseline (your reference point, e.g. the main branch). To measure performance on two branches, you must switch branches in Git or clone two copies of your repository.

We want to automate this task to run on the CI. To do that, you will need to create a performance-testing script and save it in your repository, e.g. as reassure-tests.sh.

A simple version of such a script, using a branch-changing approach, looks as follows:

#!/usr/bin/env bash
set -e

BASELINE_BRANCH=${BASELINE_BRANCH:="main"}

# Required for `git switch` on CI
git fetch origin

# Gather baseline perf measurements
git switch "$BASELINE_BRANCH"
yarn install --force
yarn reassure --baseline

# Gather current perf measurements & compare results
git switch --detach -
yarn install --force
yarn reassure

Integration

As a final setup step, you must configure your CI to run the performance testing script and output the results. At the moment, we present the output by integrating with Danger JS, which supports all major CI tools.

Updating existing Dangerfile

You will need a working Danger JS setup.

Then add Reassure Danger JS plugin to your dangerfile:

// /<project_root>/dangerfile.reassure.ts (generated by the init script)

import path from 'path';
import { dangerReassure } from 'reassure';

dangerReassure({
  inputFilePath: path.join(__dirname, '.reassure/output.md'),
});

Creating Dangerfile

If you do not have a Dangerfile (dangerfile.js or dangerfile.ts) yet, you can use the one generated by the reassure init script without making any additional changes.

You can also find it in our example Dangerfile.

Updating the CI configuration file

Finally, run both the performance testing script & danger in your CI config:

- name: Run performance testing script
  run: ./reassure-tests.sh

- name: Run Danger.js
  run: yarn danger ci
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

You can also check our example GitHub workflow.

The above example is based on GitHub Actions; config files for other CI tools should be similar, so treat it as a reference in such cases.

Note: Your performance tests will run much longer than regular integration tests, because we run each test scenario multiple times (by default, 10) and repeat that for two branches of your code. Hence, each test will run 20 times by default, or more if you increase the runs count.

Assessing CI stability

Reassure measures React component render times with microsecond precision using React.Profiler. This means the same code will run faster or slower depending on the machine, so baseline and current measurements need to be run on the same machine. Optimally, they should be run one after another.

Moreover, your CI agent needs to have stable performance to achieve meaningful results. It does not matter if your agent is fast or slow as long as it is consistent in its performance. That's why the agent should not be used during the performance tests for any other work that might impact measuring render times.

To help you assess your machine's stability, you can use the reassure check-stability command. It runs performance measurements twice for the current code, so baseline and current measurements refer to the same code. In such a case, the expected changes are 0% (no change). The degree of random performance changes will reflect the stability of your machine. This command can be run both on CI and local machines.

Normally, the random changes should stay below 5%. Results of 10% or more are considered too high, meaning you should work on improving your machine's stability.

Note: As a last resort, you can increase the runs option from the default value of 10 to 20, 50 or even 100 for all or some of your tests, based on the assumption that more test runs will even out measurement fluctuations. That will, however, make your tests run even longer.
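
For example, a minimal sketch raising the run count globally via the configure function (the value 50 is illustrative):

import { configure } from 'reassure';

configure({ runs: 50 });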

You can refer to our example GitHub workflow.

Analyzing results

Markdown report

Looking at the example, you can notice that test scenarios can be assigned to certain categories:

  • Significant Changes To Duration shows test scenarios where the performance change is statistically significant and should be looked into as it marks a potential performance loss/improvement
  • Meaningless Changes To Duration shows test scenarios where the performance change is not statistically significant
  • Changes To Count shows test scenarios where the render or execution count did change
  • Added Scenarios shows test scenarios which do not exist in the baseline measurements
  • Removed Scenarios shows test scenarios which do not exist in the current measurements

API

Measurements

measurePerformance function

A custom wrapper for the RNTL render function, responsible for rendering the passed screen inside a React.Profiler component, measuring its performance and writing the results to the output file. You can use the optional options object to customize aspects of the testing.

async function measurePerformance(
  ui: React.ReactElement,
  options?: MeasureOptions,
): Promise<MeasureResults>;

MeasureOptions type

interface MeasureOptions {
  runs?: number;
  warmupRuns?: number;
  wrapper?: React.ComponentType<{ children: ReactElement }>;
  scenario?: (view?: RenderResult) => Promise<any>;
  writeFile?: boolean;
}
  • runs: the number of runs per series for the particular test (default 10)
  • warmupRuns: the number of additional warmup runs that will be done and discarded before the actual runs (default 1)
  • wrapper: a React component, such as a Provider, that the ui will be wrapped with; note that the render duration of the wrapper itself is excluded from the results, only the wrapped component is measured (see the sketch below)
  • scenario: a custom async function defining user interaction with the UI using RNTL or RTL functions
  • writeFile: whether to write output to file (default true)
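
For example, a sketch combining the wrapper and runs options (ThemeProvider is a hypothetical provider component, not part of Reassure):

import { measurePerformance } from 'reassure';
import { ThemeProvider } from './theme';
import { ComponentUnderTest } from './ComponentUnderTest';

test('Test with wrapper', async () => {
  await measurePerformance(<ComponentUnderTest />, {
    runs: 20,
    wrapper: ({ children }) => <ThemeProvider>{children}</ThemeProvider>,
  });
});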

measureFunction function

Allows you to wrap any synchronous function, measure its execution times and write the results to the output file. You can use the optional options object to customize aspects of the testing. Note: the execution count will always be one.

async function measureFunction(
  fn: () => void,
  options?: MeasureFunctionOptions,
): Promise<MeasureResults>;

MeasureFunctionOptions type

interface MeasureFunctionOptions {
  runs?: number;
  warmupRuns?: number;
}
  • runs: the number of runs per series for the particular test (default 10)
  • warmupRuns: the number of additional warmup runs that will be done and discarded before the actual runs (default 1)
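
For example, a minimal sketch measuring a CPU-bound function (fib here is purely illustrative):

import { measureFunction } from 'reassure';

function fib(n: number): number {
  return n <= 1 ? n : fib(n - 1) + fib(n - 2);
}

test('fib 30', async () => {
  await measureFunction(() => fib(30), { runs: 20 });
});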

Configuration

Default configuration

The default config used by the measuring script. This configuration object can be overridden with the configure function.

type Config = {
  runs?: number;
  warmupRuns?: number;
  outputFile?: string;
  verbose?: boolean;
  testingLibrary?:
    | 'react-native'
    | 'react'
    | { render: (component: React.ReactElement<any>) => any; cleanup: () => any };
};
const defaultConfig: Config = {
  runs: 10,
  warmupRuns: 1,
  outputFile: '.reassure/current.perf',
  verbose: false,
  testingLibrary: undefined, // Will try auto-detect first RNTL, then RTL
};

  • runs: the number of repeated runs in a series per test (allows for higher accuracy by aggregating more data); should be handled with care
  • warmupRuns: the number of additional warmup runs that will be done and discarded before the actual runs
  • outputFile: the name of the file the records will be saved to
  • verbose: make Reassure log more, e.g. for debugging purposes
  • testingLibrary: where to look for render and cleanup functions; supported values are 'react-native', 'react' or an object providing custom render and cleanup functions

configure function

function configure(customConfig: Partial<Config>): void;

The configure function can override the default config parameters.

resetToDefault function

function resetToDefault(): void;

Resets the current config to the original defaultConfig object.

Environment variables

You can use the following environment variables to alter your test runner settings.

  • TEST_RUNNER_PATH: an alternative path for your test runner. Defaults to 'node_modules/.bin/jest' or on Windows 'node_modules/jest/bin/jest'
  • TEST_RUNNER_ARGS: a set of arguments fed to the runner. Defaults to '--runInBand --testMatch "<rootDir>/**/*.perf-test.[jt]s?(x)"'

Example:

TEST_RUNNER_PATH=myOwnPath/jest/bin yarn reassure
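
Similarly, a sketch overriding the runner arguments via TEST_RUNNER_ARGS (the custom glob is illustrative):

TEST_RUNNER_ARGS='--runInBand --testMatch "<rootDir>/perf/**/*.perf-test.[jt]s?(x)"' yarn reassure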

External References

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

MIT

Made with ❤️ at Callstack

Reassure is an Open Source project and will always remain free to use. The project has been developed in close partnership with Entain and was originally their in-house project. Thanks to their willingness to develop the React & React Native ecosystem, we decided to make it Open Source. If you think it's cool, please star it 🌟

Callstack is a group of React and React Native experts. If you need help with these or want to say hi, contact us at [email protected]!

Like the project? ⚛️ Join the Callstack team who does amazing stuff for clients and drives React Native Open Source! 🔥

reassure's People

Contributors

adhorodyski, dependabot[bot], dereklucas, edenizk, gedu, gianpaj, josemak25, kbieganowski, lukewalczak, mdjastrzebski, renovate[bot], retyui, shaswatprabhat, thymikee, tmaszko, xiltyn, zacharyfmarion


reassure's Issues

[FEATURE] display compare results to console after `reassure measure` if both baseline & current results are present

Is your feature request related to a problem? Please describe.
When working locally, the user has to run separate reassure measure and reassure compare steps to get the measurement results.

Describe the solution you'd like
If, after running reassure measure, both current and baseline performance measurements are present, then automatically display the compare results in the console. Probably not worth generating MD & JSON output in such cases.

Add a --no-compare flag to disable the behaviour (i.e. just measure and do not display compare results in the console).

Describe alternatives you've considered

  1. Not implementing it at all and requiring separate measure and compare steps for the local workflow

Additional context
The V8 test suite behaves in a similar manner; they do not even have a separate compare script in their normal workflow but just do it within the current code measure setup.

[FEATURE] capture branch/commit metadata in perf results file

Is your feature request related to a problem? Please describe.
Currently our current.perf and baseline.perf contain only performance results. Hence, these files do not let you tell which code version the results were gathered on. Having that info would tag our measurements with the code version.

Describe the solution you'd like

  • The first line of the .perf file could contain { metadata: { branch: 'branchname', hash: 'commitHash' }} – a line that would not contain regular performance measurement entries but only metadata related to the measurements.
  • Since we do not want to directly integrate reassure measure, which creates these files, with any version control system, these could be passed as runtime flags, e.g. reassure measure --branch branchname --hash commitHash.
  • The user would pass this data from their reassure-tests.sh script, on CI and perhaps locally.

Describe alternatives you've considered
None

Additional context
None

[FEATURE] Validate performance results files input to compare package

Is your feature request related to a problem? Please describe.

Currently reading of performance file by compare package does not perform any validation, so the CLI could crash with cryptic errors if file is corrupted.

Describe the solution you'd like
Use zod or another high-quality library to parse/validate the contents of the results file. This will require defining schemas for header and entry rows and should be configured to work with TypeScript. In case of an error, information about the incorrect line should be displayed.

Describe alternatives you've considered
Writing the validation by hand, but that would not be worth the tradeoff of avoiding additional deps.

Additional context
N/A

[BUG] Does not work with pnpm

Describe the bug
When using pnpm as a package manager, the reassure binary cannot be found.
After running pnpm reassure I get:

 ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL  not found: reassure

I noticed that the reassure binary is missing from the node_modules/.bin folder.

To Reproduce
Steps to reproduce the behavior:

  1. Create react project with create-react-app
  2. Add reassure to the project using pnpm add reassure -D
  3. run pnpm reassure

refactor: simplify high-level test API

Currently perf tests consist of measureRender and writeStats calls (+ a dummy expect). In order to simplify the API, a single method should be created, e.g. async function perfTest(jsx, options).

[FEATURE] automate `branch` and `commit hash` CLI options

Is your feature request related to a problem? Please describe.

Automatically detect the branch name and commit hash when Reassure detects that it is run inside a Git repo.

Describe the solution you'd like

Basically, make the current --branch $(git branch --show-current) --commitHash $(git rev-parse HEAD) options automatic when the user is using Git.
Keep the existing CLI options to allow users to override the default values and/or support other source control systems.
Invoke git binary commands using Node's child_process API to avoid external deps (see the sketch below).
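
A minimal sketch of such detection using Node's child_process API (function names are illustrative):

import { execSync } from 'child_process';

// Returns undefined when git is unavailable or we are not inside a repo
function detectBranch(): string | undefined {
  try {
    return execSync('git branch --show-current').toString().trim();
  } catch {
    return undefined;
  }
}

function detectCommitHash(): string | undefined {
  try {
    return execSync('git rev-parse HEAD').toString().trim();
  } catch {
    return undefined;
  }
}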

Describe alternatives you've considered

  • Instead of invoking an external binary, use some node package like simple-git to communicate with the Git repo.

Additional context

[BUG]: measure code is running under incorrect Node.js configuration.

Describe the bug
Running yarn reassure fails with the error below, though the test completes successfully:

Reassure: measure code is running under incorrect Node.js configuration.
    Performance test code should be run in Jest with certain Node.js flags to increase measurements stability.
    Make sure you use the Reassure CLI and run it using "reassure" command.

I have created a fresh Expo SDK 48 project and added Reassure by following the official tutorial. I have set up a simple component:

import { View, Text } from 'react-native';

export default function Index() {
    return (
        <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
            <Text>hello world</Text>
        </View>
    )
}

Then I set up a simple performance measurement test:

import { measurePerformance } from 'reassure';
import Component from '__path_to_component';

test('auth performance', async () => {
    await measurePerformance(<Component />);
});

Running yarn reassure throws the above error.

To Reproduce
Steps to reproduce the behavior:

  1. Create new Expo SDK 48 project
  2. Add jest, jest-expo, @testing-library/react-native and configure accordingly
  3. Run a simple performance test and yarn reassure
  4. Observe above warning/error

Expected behavior
Execution should complete without warning or error

Desktop (please complete the following information):

  • OS: MacOS Monterey

Additional context

"reassure": "^0.7.1"
"expo": 48.0.1,
"react-native": "0.71.3",   
"jest": "^29.2.1",
"jest-expo": "^48.0.1",
"@testing-library/jest-native": "^5.4.2",

jest.config.js

module.exports = {
    preset: "jest-expo",
    setupFilesAfterEnv: ["<rootDir>/jest.setup.tsx"],
    transformIgnorePatterns: [
        "node_modules/(?!((jest-)?react-native|@react-native(-community)?)|expo(nent)?|@expo(nent)?/.*|@expo-google-fonts/.*|react-navigation|@react-navigation/.*|@unimodules/.*|unimodules|sentry-expo|native-base|react-native-svg)"
    ]
}

[BUG] Windows support | error SyntaxError: missing ) after argument list

Describe the bug
I am not sure whether this issue is supposed to be a feature or a bug.
When I run Reassure on Windows (tested on 11 and 10), I get the SyntaxError: missing ) after argument list error.

To Reproduce
Steps to reproduce the behavior:

  1. Clone Reassure example in windows
  2. Install packages
  3. Run yarn reassure
  4. See the error

Expected behavior
For now I am setting the TEST_RUNNER_PATH to node_modules/jest/bin/jest


Desktop (please complete the following information):

  • Windows 11 | node: v16.13.0

Additional context
If anyone else has the same problem, my solution was to add the cross-env package and set the package.json script to this:
"perf-test": "cross-env TEST_RUNNER_PATH=node_modules/jest/bin/jest reassure"

monorepo: cleanup deps

Description

Since the migration to the monorepo setup, there are some unnecessary deps left in the packages extracted from the single large package.

Scope

  • review deps, devDeps & peerDeps in all packages, remove unnecessary ones

[FEATURE] Add ability to pass --inspect flag to node to debug test execution

Is your feature request related to a problem? Please describe.

When tests are unstable locally, it is hard to figure out why this would be happening.

Describe the solution you'd like
A clear and concise description of what you want to happen.

When we have run into issues with unstable or long-running tests, it has been useful to pass an --inspect flag to the node child process (just by editing the lib commonjs folder in reassure-cli). It would be nice if we could just run yarn reassure --baseline --inspect and have that flag automatically added. Or we could use the --inspect-brk option, or both. I assume (but have not confirmed) this has the potential to affect the stability of test results, so we could have a big warning saying something along the lines of: "You are running node with the --inspect flag; this should not be used in CI to measure performance as it can result in greater render variance".

I'd be happy to put up a PR with the change and some documentation.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Editing node modules is a bit of a pain but it is fine if we don't want to support this in the library.

Additional context
Add any other context or screenshots about the feature request here.

[FEATURE] Next.js example setup

Is your feature request related to a problem? Please describe.
Create an example setup for a Next.js app. Similar to our current examples/web-vite, there would be an examples/web-nextjs folder showcasing how to connect Reassure with Next.js.

Describe the solution you'd like
Generate a basic Next.js app, preferably in TS, though it could also be in JS if there are any serious problems with TS. Then add Jest, RNTL and Reassure deps, and Reassure scripts to it. In order to have something to measure, add the SlowList (https://github.com/callstack/reassure/blob/main/examples/web-vite/src/SlowList.perf-test.tsx) component and tests (both unit and perf) from the web-vite example.

Make sure that you can run Reassure locally and that it does not generate warnings about not being able to set performance flags.

Voila! That's it.

Potential issues
There could be problems with making Jest run unit/perf tests in some cases (there were some for Create React App, but none for the Vite web app).

Process exit code on test failure

Describe the improvement

It seems like the reassure-cli process always returns a success exit code no matter the result of the tests. Currently this makes the CI task always succeed, even if some code breaks the tests.

I think test failures usually mean something is wrong with the test rather than a perf regression (unless maybe the test times out, although this could also be because of an invalid scenario).

Scope of improvement

Return the error code returned by the jest process.

Suggested implementation steps

[Native modules]: How can native modules be made to work when using Reassure?

I must say Reassure is really an awesome tool and I decided to use it in our app development, but I'm facing an issue: I wonder how I can make native modules work well. As you know, when using Reassure all the React Native components are rendered in the Node.js environment, but a commercial app in real life definitely depends on many native modules. Could someone tell me the best practice for handling native modules? Thanks.

Adjust the default `--testMatch` parameter passed to Jest to better reflect the internal patterns of Jest itself

Describe the improvement
Consider updating the default testMatch to better reflect the default patterns used by Jest itself, and allow users to also place their performance tests in a dedicated __perf__ directory without needing the *.perf-test. prefix:

[ "**/__tests__/**/*.[jt]s?(x)", "**/?(*.)+(spec|test).[jt]s?(x)" ]

Scope of improvement
Update the measure command in the reassure-cli package

from

'<rootDir>/**/*.perf-test.[jt]s?(x)'

to

[ "<rootDir>/**/__perf__/**/*.[jt]s?(x)", "<rootDir>/**/*.perf-test.[jt]s?(x)" ]

Suggested implementation steps
One quick PR should do. Additionally, we need to make sure we're properly handling testMatch in any other place, since it would now be an array as per the Jest docs.

[FEATURE] Build GH action to run perf tests

Is your feature request related to a problem? Please describe.
Currently we piggyback on Danger JS to offer support for popular CIs. This request is about building a direct integration with GitHub Actions.

Describe the solution you'd like

  • GH action that directly integrates with Reassure.
  • Separate package in the monorepo
  • Does not use Danger JS

Describe alternatives you've considered
None

Additional context
None

Remove/improve `scale` parameter from `measureRender`

In the current implementation we use the scale parameter for "smoothing" the results, mainly for small components rendering in < 10 ms, where the 1 ms measurement grain can have a large % impact on render times.

This method however is not statistically sound, as it leads to smoothing/averaging of results for individual runs.

Proposed solutions:

  • remove scale parameter
  • allow a scale parameter that would cause n instances of the component to be rendered side-by-side (e.g. under a common React.Fragment or View parent). This approach however seems to conflict with the scenario param. Hence, the solution here might be to allow either scale or scenario but not both.

feature: unified binary

Currently Reassure contains two binaries:

  • reassure-test
  • reassure-stability

As well as two low-level scripts:

  • perf-measure
  • perf-compare

All of the above are written in bash.

Desired state:

  • single command accepting subcommands:
    • reassure measure
    • reassure compare
    • reassure check-stability
  • user script for coordinating whole testing process including git branch changes

The CLI should be written in Node.js for cross-platform (I am talking about you, Windows) support instead of bash.

[FEATURE] Component render details

Describe the solution you'd like
We started using Reassure to measure a particular screen's performance. To render it under the right conditions, we currently render our entire app navigation container with all the necessary context providers (e.g. Jotai, RQ, NativeBase, etc...) and navigate to that problematic screen (i.e. open a modal screen).
Running that test, we see that it takes on average 500ms and 20 re-renders.
It'd be great to have a way to see which components took the longest and re-rendered the most during that test. Maybe we could specify a list of component names to track (e.g. high-level screen components), or track the slowest 10, or only n levels deep.

Describe alternatives you've considered
We're currently using the Profiler inside ReactDevTools (inside Flipper) to see those component render times. It's pretty good for debugging, but it'd be great if we could get them from our automated perf tests as well.

Additional context
I'd be happy to contribute if you think this is a feasible feature to implement.

Thanks for this amazing tool. Render (and re-render) issues are the bottleneck of our app right now (we think NativeBase is not helping here). We're working hard on fixing some of these issues but we need a way to prevent future regressions. It's reassuring (pun intended) to see there's a tool that can warn us now.

[FEATURE] - Add unit tests for each package

Right now, while developing features, we need to build the project every time to check against the provided examples.
If we had unit tests integrated, a lot of this effort might be reduced.
This will also ensure no existing functionality breaks during feature development.

Maybe some Jest/Mocha-based scripts for unit testing, which can later be evolved to check coverage and integrate with CI.

Question - Reassure + Expo

Description

I'm trying to use Reassure with Expo and it displays some errors.

Does anyone have any idea whether Reassure should work with Expo, or is the error related to something different?

Environment

"expo": "~45.0.0",
"react-native": "0.68.2",
"react": "17.0.2",
"jest": "^26.6.3",
"jest-expo": "^45.0.1",
"@testing-library/jest-native": "^4.0.5",
"@testing-library/react-native": "^10.1.1",
"react-test-renderer": "^17.0.2",
"reassure": "^0.1.0",

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • chore(deps): update dependency @testing-library/react-native to v12

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

circleci
.circleci/config.yml
  • circleci/node 10
github-actions
.github/workflows/docs-deploy.yml
  • actions/checkout v3
  • actions/setup-node v3
  • peaceiris/actions-gh-pages v3
.github/workflows/main.yml
  • actions/checkout v3
  • actions/setup-node v3
.github/workflows/stability.yml
  • actions/checkout v3
  • actions/setup-node v3
.github/workflows/test-example-apps.yml
  • actions/checkout v3
  • actions/setup-node v3
  • actions/checkout v3
  • actions/setup-node v3
npm
docusaurus/package.json
  • @docusaurus/core 2.3.1
  • @docusaurus/preset-classic 2.3.1
  • @mdx-js/react ^1.6.22
  • clsx ^1.2.1
  • prism-react-renderer ^1.3.5
  • react ^17.0.2
  • react-dom ^17.0.2
  • @docusaurus/module-type-aliases 2.3.1
  • @tsconfig/docusaurus ^1.0.5
  • typescript ^4.7.4
  • node >=16.14
package.json
  • @babel/core ^7.20.12
  • @babel/runtime ^7.20.7
  • @callstack/eslint-config ^13.0.2
  • @changesets/cli ^2.26.0
  • @testing-library/react ^14.0.0
  • @testing-library/react-native ^11.5.1
  • @types/jest ^29.2.5
  • @types/react ^18.0.26
  • @types/react-native 0.71.3
  • babel-jest ^29.3.1
  • danger ^11.2.3
  • eslint ^8.32.0
  • eslint-config-prettier ^8.6.0
  • eslint-plugin-prettier ^4.2.1
  • jest ^29.3.1
  • pod-install ^0.1.38
  • prettier ^2.8.3
  • react 18.2.0
  • react-dom 18.2.0
  • react-native 0.71.3
  • react-native-builder-bob ^0.20.3
  • react-test-renderer 18.2.0
  • turbo ^1.6.3
  • typescript ^4.9.4
  • react *
packages/reassure-cli/package.json
  • simple-git ^3.16.0
  • yargs ^17.6.2
  • @types/yargs ^17.0.20
packages/reassure-compare/package.json
  • markdown-builder ^0.9.0
  • markdown-table ^2.0.0
  • zod ^3.20.2
  • babel-jest ^29.3.1
  • ts-jest ^29.0.5
packages/reassure-danger/package.json
packages/reassure-logger/package.json
  • chalk 4.1.2
packages/reassure-measure/package.json
  • mathjs ^11.5.0
  • react *
packages/reassure/package.json

  • Check this box to trigger a request for Renovate to run again on this repository

[FEATURE] Ability to run reassure for a single file

Is your feature request related to a problem? Please describe.
Is it possible to run Reassure for a single file? I have a lot of perf-test.js files in my project, and sometimes when I change one file it does not automatically get picked up first.

Describe the solution you'd like
Being able to run tests for a single file like npx reassure ./tests/example.perf-test.js

Describe alternatives you've considered
none

Additional context
none

[BUG] Fails to run inside a mono-repo

Describe the bug

Error: Cannot find module '/home/Projects/<mono-repo>/packages/benchmarks/node_modules/.bin/jest'

Trying to run reassure within an npm or yarn workspace fails. It appears the path to jest has been hardcoded but in a mono-repo the jest binary may be in the root node_modules.
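
A possible workaround, assuming the hoisted Jest binary lives in the repository root (the relative path is illustrative):

TEST_RUNNER_PATH=../../node_modules/.bin/jest yarn reassure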

[BUG] TEST_RUNNER_PATH not taken into account

Describe the bug
When specifying TEST_RUNNER_PATH, the specified path is not taken into account, the default path node_modules/.bin/jest is used.

I think the issue is in the measure command (measure.ts):

The current line that grabs the test runner path:

  const testRunnerPath =
    process.env.TEST_RUNNER_PATH ?? process.platform === 'win32'
      ? 'node_modules/jest/bin/jest'
      : 'node_modules/.bin/jest';

Should be:

  const testRunnerPath =
    process.env.TEST_RUNNER_PATH ?? (process.platform === 'win32'
      ? 'node_modules/jest/bin/jest'
      : 'node_modules/.bin/jest');

To Reproduce
Steps to reproduce the behavior:

  1. Launch TEST_RUNNER_PATH=somepath yarn reassure
  2. Node throws an error: node_modules/.bin/jest cannot be found

Expected behavior

The TEST_RUNNER_PATH environment variable should be used instead of the default.

Desktop:

  • OS: macOS
  • Version 13.1

React (web) support

Scope:

  • separate import for using RTL and React.js
  • duplicate renderMeasure using RTL render and RenderAPI
  • web example app (tests actually)

Hide meaningless changes behind details

And merge the "insignificant" parts there as well. We can always mark them somehow, e.g. with a ⚠️ emoji, to indicate when a result gets close to a significant change.

[FEATURE] Setup Docusaurus docs

Is your feature request related to a problem? Please describe.
Currently we do not have a documentation page, only a README. As the amount of documentation grows, it would make sense to organize it into separate Getting Started, API, guides, etc. sections.

Describe the solution you'd like
Create a Docusaurus setup for Reassure! Regarding content, we should start with a single "Getting Started" page containing the Readme.md file. We also need a typical sidebar where we would be able to add more docs pages in the future.

This documentation will be published using GH pages.

Issue: investigate random fluctuations to render count between runs when using VirtualizedLists

Summary

@TMaszko & @Xiltyn reported that under some circumstances render counts fluctuate by +/- 1 or +/- 2 between runs when applying perf tests to a complex codebase. Their initial research pointed to update renders triggered by VirtualizedList/FlatList when the underlying list data changes.

To Do

Investigate the impact of FlatList/etc on the variable number of renders. Create test scenarios that try to replicate the random behavior and isolate it as much as possible in order to find the root cause & a possible solution or workaround.

[FEATURE] Add option to skip mount in measurePerformance

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

[BUG] measure code is running under incorrect Node.js configuration

I'm getting this message when running yarn reassure:

console.error
❌ Reassure: measure code is running under incorrect Node.js configuration.
Performance test code should be run in Jest with certain Node.js flags to increase measurements stability.
Make sure you use the Reassure CLI and run it using "reassure" command.

Same with npx reassure or yarn node --expose_gc $(yarn bin reassure).

reassure: v0.4.1
node: v14.17.6
yarn: v1.22.19
jest: v26.6.3
MBP M1
macOS: v12.4

License: resolve markdown builder copied code

We currently use markdown-table.ts, which contains some code from the Internet; please resolve the licensing here.

Potential solutions:

  • replace the code with some NPM package (e.g. markdown-table)
  • provide proper license header if license is correct

[FEATURE] Ability to measure code execution time

Is your feature request related to a problem? Please describe.
Reassure can be thought of as a general performance measurement and comparison tool. The current measurePerformance works on a JSX element by rendering it with React.Profiler; this is run a number of times and saved to a performance file. Next, it is compared with the baseline performance file using statistical tools.

All of the above steps, except rendering using React.Profiler, just work on render durations (& counts), and might as well serve to analyse non-render-related measurements.

Describe the solution you'd like
Add a measureDuration function accepting a callback function with the code to measure.

function measureDuration(callback: () => void, options) 

The function body would be similar to the existing measurePerformance, but would swap the rendering code for just running the callback.

The user could optionally call the following methods inside their code:

function timerStart(name);
function timerEnd(name);

Calling timerStart(name) would capture the current timestamp for the given name, using performance.now() if available or Date.now otherwise.
Calling timerEnd(name) would also capture the current timestamp for the given name, but would additionally take the end and start timestamps, calculate their difference, and treat it as the duration of the name event.

A test that does not use timerStart/timerEnd would add a single line to the performance results file, in the same format as current performance entries. The name field would be taken from the Jest test name, as is currently done for render tests. The duration-related fields would contain the code execution duration, while the count-related fields would assume that count is 1.

If timerStart/End were used, each timer name should generate an additional line:

  • the name of that test would be the Jest test name concatenated with the timer name: {testName} - {timerName}
  • duration fields would contain the sum of the given timer's durations from the given test run
  • count fields would contain the number of times the given timer has been run in the same test
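
A sketch of how the proposed API could look in a test (measureDuration, timerStart and timerEnd are proposals from this issue, not an existing Reassure API; the payload is illustrative):

import { measureDuration, timerStart, timerEnd } from 'reassure';

const largePayload = JSON.stringify({ items: Array.from({ length: 100000 }, (_, i) => i) });

test('parse large payload', async () => {
  await measureDuration(() => {
    timerStart('parse');
    JSON.parse(largePayload);
    timerEnd('parse');
  });
});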

[FEATURE] Reassure init CLI command

Is your feature request related to a problem? Please describe.
Add an init command to the CLI. This command would be invoked like this:

$ yarn reassure init

This command would apply the common Reassure setup steps that currently need to be done by hand.

Describe the solution you'd like
Before making the changes mentioned above, the tool should check if reassure-tests.sh exists, and assume that if it exists then the folder has already been initialized for Reassure.

For the subsequent steps (danger file, gitignore): they should only be carried out if the danger file does not yet exist, or if gitignore does not contain a reassure entry.

The tool should be user friendly and have clear help messages.

Make sure tests are run in-band

Parallelizing tests creates extra variance and uncertainty, making it hard to measure performance reliably. For now, let's pass the --runInBand option to Jest so it always runs tests serially. Later on, once we have a fairly stable execution environment, we should figure out a way to parallelize tests across different CI workers (e.g. using Jest's --shard flag, or reimplement it).

[FEATURE] Create integration with Bitrise

Bitrise offers easy-to-use "steps" to be included in their CI pipelines. This could potentially make it easier for people to adopt Reassure, especially if they're on paid plans, in which case Bitrise promises to deliver dedicated machines spawning VMs that are exact copies of each other (which we could leverage for sharding tests in the future).

Let's explore creating such a step. Here's relevant docs: https://devcenter.bitrise.io/en/steps-and-workflows/developing-your-own-bitrise-step/developing-a-new-step.html

[BUG] native-expo example project not working properly

Describe the bug
After cloning the repo, yarn reassure yields a validation error:

● Validation Error:

  Module @testing-library/jest-native/extend-expect in the setupFilesAfterEnv option was not found.
         <rootDir> is: /Users/proximity/IdeaProjects/reassure/examples/native-expo

  Configuration Documentation:
  https://jestjs.io/docs/configuration


❌  Something went wrong, current performance file (.reassure/current.perf) does not exist

✨  Done in 0.98s.

To Reproduce
Steps to reproduce the behavior:

  1. Clone repo
  2. Open native-expo project
  3. Run yarn
  4. Run yarn reassure

Expected behavior
The tests run successfully


Desktop (please complete the following information):

  • OS: [e.g. iOS]
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]

Smartphone (please complete the following information):

  • Device: Macbook Pro 2019
  • OS: macOS 12.4

ci: automate typechecks on CI

Describe the improvement
We currently have TypeScript type checks, however they are not automatically invoked on CI, hence the code is not effectively checked.

Scope of improvement

  • Add workflow/step on CI
  • Fix any present TS errors

Suggested implementation steps

  1. Fix TS errors locally
  2. Add workflow/workflow step on CI

ci: automate ES Lint checks on CI

Describe the improvement
We currently have ES Lint integration, however it is not automatically invoked on CI, hence the code is not effectively checked.

Scope of improvement

  • Add workflow/step on CI
  • Fix any present ES Lint errors
  • Configure ESLint to work well with monorepo (there are some issues with generated code).

Suggested implementation steps

  1. Fix ES Lint errors locally
  2. Add workflow/workflow step on CI

[BUG] Tests pass but there is an error output

Describe the bug
After adding the lib, I tried to write a test for a component, and after resolving some issues with my implementation I got the test to pass, but with a console.error message:

 ● Console

    console.error
      ❌ Reassure: measure code is running under incorrect Node.js configuration.
      Performance test code should be run in Jest with certain Node.js flags to increase measurements stability.
      Make sure you use the Reassure CLI and run it using "reassure" command.

      27 |
      28 | test('ImageGallery', async () => {
    > 29 |   await measurePerformance(
         |         ^
      30 |     <Wrapper>
      31 |       <ImageGallery />
      32 |     </Wrapper>,

To Reproduce
Steps to reproduce the behavior:

  1. Add a test with all the providers required
  2. Run yarn reassure

Expected behavior
The test should pass successfully and without an error output

Screenshots
jest.config.js

module.exports = {
  ...
  preset: 'jest-expo',
  ...
};

jest.config.js

module.exports = {
  ...
  preset: '@testing-library/react-native',
  ...
};


Desktop (please complete the following information):

  • OS: [e.g. iOS]
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]

Smartphone (please complete the following information):

  • Device: Macbook Pro 2019
  • OS: macOS 12.4

Additional context

  • Related issue #174
  • I'm using Zustand, RNE and restyle
  • I tried adding the file from the native-expo example with no success (see the configs above)

[FEATURE] Detect unnecessary renders

Is your feature request related to a problem? Please describe.
This would be an optional feature activated by a measurePerformance option on the per-test level and a configure option on the global level. The option name could be detectRedundantRenders or something similar.

When turned on, the measuring code would analyze the rendered output and notify the user if a render resulted in the same user interface being generated, i.e. that the render was redundant.

Describe the solution you'd like
When turned on, after each onRender callback from the React.Profiler component, the measuring code would run the .toJSON method from RNTL/RTL to generate a host component representation of the output. Next, it would compare that output with the one generated on the previous onRender callback and warn the user if they are the same, meaning the render did not change the host component representation of the UI, i.e. the UI did not change. A sketch of this idea follows.
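
A minimal sketch of that comparison logic (illustrative only; a real implementation would hook into Reassure's profiler wrapper):

// Compare consecutive host component trees serialized via toJSON()
let previousOutput: string | undefined;

function checkForRedundantRender(json: unknown): void {
  const currentOutput = JSON.stringify(json);
  if (previousOutput !== undefined && previousOutput === currentOutput) {
    console.warn('Redundant render detected: host component output did not change.');
  }
  previousOutput = currentOutput;
}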

refactor: extract `reassure-danger` package

Describe the improvement
Currently the Danger JS plugin code is in the main reassure package. As we now have a monorepo and plan to have multiple CI integrations, it should be moved to its own reassure-danger package.

Scope of improvement

  • extract Danger JS plugin from reassure package to reassure-danger

Suggested implementation steps

  1. Extract plugin
