Test Reporter

This GitHub Action displays test results from popular testing frameworks directly in GitHub.

✔️ Parses test results in XML or JSON format and creates a nice report as a GitHub Check Run

✔️ Annotates code where it failed, based on the message and stack trace captured during test execution

✔️ Provides the final conclusion and counts of passed, failed, and skipped tests as output parameters

Supported languages / frameworks: see the Supported formats section below.

Missing support for your favorite language or framework? Please create an issue or contribute a PR.

Example

The following setup does not work in workflows triggered by a pull request from a forked repository. If that's fine for you, using this action is as simple as:

on:
  pull_request:
  push:
permissions:
  contents: read
  actions: read
  checks: write
jobs:
  build-test:
    name: Build & Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3     # checkout the repo
      - run: npm ci                   # install packages
      - run: npm test                 # run tests (configured to use jest-junit reporter)

      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()    # run this step even if previous step failed
        with:
          name: JEST Tests            # Name of the check run which will be created
          path: reports/jest-*.xml    # Path to test results
          reporter: jest-junit        # Format of test results

Recommended setup for public repositories

Workflows triggered by pull requests from forked repositories are executed with a read-only token and therefore can't create check runs. To work around this security restriction, you must use two separate workflows:

  1. CI runs in the context of the PR head branch with the read-only token. It executes the tests and uploads the test results as a build artifact.
  2. Test Report runs in the context of the repository's main branch with a read/write token. It downloads the test results and creates the report.

The second workflow only runs after its workflow file has been merged into your default branch (typically main or master); it won't run for a PR until the workflow file is part of that branch.

PR head branch: .github/workflows/ci.yml

name: 'CI'
on:
  pull_request:
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3         # checkout the repo
      - run: npm ci                       # install packages
      - run: npm test                     # run tests (configured to use jest-junit reporter)
      - uses: actions/upload-artifact@v3  # upload test results
        if: success() || failure()        # run this step even if previous step failed
        with:
          name: test-results
          path: jest-junit.xml

default branch: .github/workflows/test-report.yml

name: 'Test Report'
on:
  workflow_run:
    workflows: ['CI']                     # runs after CI workflow
    types:
      - completed
permissions:
  contents: read
  actions: read
  checks: write
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
    - uses: dorny/test-reporter@v1
      with:
        artifact: test-results            # artifact name
        name: JEST Tests                  # Name of the check run which will be created
        path: '*.xml'                     # Path to test results (inside artifact .zip)
        reporter: jest-junit              # Format of test results

Usage

- uses: dorny/test-reporter@v1
  with:

    # Name or regex of artifact containing test results
    # Regular expression must be enclosed in '/'.
    # Values from captured groups will replace occurrences of $N in report name.
    # Example:
    #   artifact: /test-results-(.*)/
    #   name: 'Test report $1'
    #   -> Artifact 'test-results-ubuntu' would create report 'Test report ubuntu'
    artifact: ''

    # Name of the Check Run which will be created
    name: ''

    # Comma-separated list of paths to test results
    # Supports wildcards via [fast-glob](https://github.com/mrmlnc/fast-glob)
    # All matched result files must be of the same format
    path: ''

    # The fast-glob library that is used internally interprets backslashes as escape characters.
    # If enabled, all backslashes in the provided path are replaced by forward slashes and act as directory separators.
    # This can be useful when the path input is composed dynamically from existing directory paths on Windows.
    path-replace-backslashes: 'false'
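    # Hypothetical example (the values below are illustrative, not defaults):
    #   path: ${{ runner.temp }}\TestResults\*.trx
    #   path-replace-backslashes: 'true'
    #   -> matched as e.g. D:/a/_temp/TestResults/*.trx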

    # Format of test results. Supported options:
    #   dart-json
    #   dotnet-trx
    #   flutter-json
    #   java-junit
    #   jest-junit
    #   mocha-json
    #   rspec-json
    #   swift-xunit
    reporter: ''

    # Allows you to generate only the summary.
    # If enabled, the report will contain a table listing each test results file and the number of passed, failed, and skipped tests.
    # Detailed listing of test suites and test cases will be skipped.
    only-summary: 'false'

    # Limits which test suites are listed:
    #   all
    #   failed
    list-suites: 'all'

    # Limits which test cases are listed:
    #   all
    #   failed
    #   none
    list-tests: 'all'

    # Limits the number of created annotations with error message and stack trace captured during test execution.
    # Must be less than or equal to 50.
    max-annotations: '10'

    # Set this action as failed if the test report contains any failed tests
    fail-on-error: 'true'

    # Set this action as failed if no test results were found
    fail-on-empty: 'true'

    # Relative path under $GITHUB_WORKSPACE where the repository was checked out.
    working-directory: ''

    # Personal access token used to interact with the GitHub API
    # Default: ${{ github.token }}
    token: ''

Output parameters

Name        Description
conclusion  success or failure
passed      Count of passed tests
failed      Count of failed tests
skipped     Count of skipped tests
time        Test execution time [ms]
url         Check run URL
url_html    Check run HTML URL
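
The outputs can be consumed in later steps through the step's id. A minimal sketch (the step id test-report and the echo commands are illustrative, not part of the action):

      - name: Test Report
        id: test-report                   # an id is required to reference outputs
        uses: dorny/test-reporter@v1
        with:
          name: JEST Tests
          path: reports/jest-*.xml
          reporter: jest-junit

      - name: Print test counts
        if: success() || failure()
        run: |
          echo "Conclusion: ${{ steps.test-report.outputs.conclusion }}"
          echo "Passed: ${{ steps.test-report.outputs.passed }}, Failed: ${{ steps.test-report.outputs.failed }}, Skipped: ${{ steps.test-report.outputs.skipped }}"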

Supported formats

dart-json

Test run must be configured to use JSON reporter. You can configure it in dart_test.yaml:

file_reporters:
  json: reports/test-results.json

Or with CLI arguments:

dart test --file-reporter="json:test-results.json"
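
A minimal workflow pairing for this setup (the check run name and result path are illustrative):

      - run: dart test --file-reporter="json:test-results.json"
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: Dart Tests
          path: test-results.json
          reporter: dart-json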

dotnet-trx

Test execution must be configured to produce Visual Studio Test Results files (TRX). To get test results in TRX format, you can execute your tests with CLI arguments:

dotnet test --logger "trx;LogFileName=test-results.trx"

Or you can configure TRX test output in *.csproj or Directory.Build.props:

<PropertyGroup>
  <VSTestLogger>trx%3bLogFileName=$(MSBuildProjectName).trx</VSTestLogger>
  <VSTestResultsDirectory>$(MSBuildThisFileDirectory)/TestResults/$(TargetFramework)</VSTestResultsDirectory>
</PropertyGroup>
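
A minimal reporting step for TRX output might look like this sketch (the check run name and glob are illustrative):

      - run: dotnet test --logger "trx;LogFileName=test-results.trx"
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: .NET Tests
          path: '**/test-results.trx'
          reporter: dotnet-trx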

For more information, see dotnet test.

flutter-json

Test run must be configured to use JSON reporter. You can configure it in dart_test.yaml:

file_reporters:
  json: reports/test-results.json

Or with an (undocumented) CLI argument:

flutter test --machine > test-results.json
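
A sketch of the corresponding workflow steps (names and paths are illustrative):

      - run: flutter test --machine > test-results.json
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: Flutter Tests
          path: test-results.json
          reporter: flutter-json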

According to the documentation, dart_test.yaml should be at the root of the package, next to the package's pubspec. On the current stable and beta channels this doesn't work, and you have to put dart_test.yaml inside your test folder. On the dev channel, it's already fixed.

java-junit (Experimental)

Support for JUnit XML is experimental - it should work, but it has not been extensively tested. For code annotations to work properly, your directory structure must match the package name. This is because Java stack traces don't contain the full path to the source file, so some heuristics were needed to map a line in a stack trace to an actual source file.
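
A sketch for a Maven build (the Surefire report location is an assumption about your build, not something this action mandates):

      - run: mvn test
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: JUnit Tests
          path: '**/target/surefire-reports/TEST-*.xml'   # assumed Surefire output location
          reporter: java-junit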

jest-junit

JEST testing framework support requires the jest-junit reporter. It creates test results in JUnit XML format, which can then be processed by this action. You can use the following example configuration in package.json:

"scripts": {
  "test": "jest --ci --reporters=default --reporters=jest-junit"
},
"devDependencies": {
  "jest": "^26.5.3",
  "jest-junit": "^12.0.0"
},
"jest-junit": {
  "outputDirectory": "reports",
  "outputName": "jest-junit.xml",
  "ancestorSeparator": "",
  "uniqueOutputName": "false",
  "suiteNameTemplate": "{filepath}",
  "classNameTemplate": "{classname}",
  "titleTemplate": "{title}"
}

The configuration of uniqueOutputName, suiteNameTemplate, classNameTemplate, and titleTemplate is important for proper visualization of test results.

mocha-json

Mocha testing framework support requires:

  • Mocha version v7.2.0 or higher
  • Usage of the json reporter

You can use the following example configuration in package.json:

"scripts": {
  "test": "mocha --reporter json > test-results.json"
}

Test processing might fail if any of your tests writes anything to standard output. Mocha unfortunately doesn't have an option to store JSON output directly in a file, so we have to rely on redirecting its standard output. There is work in progress to fix this: mocha#4607
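
A sketch of the corresponding workflow steps under this setup (names and paths are illustrative):

      - run: npm test                     # mocha --reporter json > test-results.json
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: Mocha Tests
          path: test-results.json
          reporter: mocha-json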

swift-xunit (Experimental)

Support for Swift test results in xUnit format is experimental - it should work, but it has not been extensively tested.
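
A sketch, assuming the results file is produced with Swift Package Manager's --xunit-output flag (all names and paths are illustrative):

      - run: swift test --parallel --xunit-output test-results.xml
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: Swift Tests
          path: test-results.xml
          reporter: swift-xunit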

GitHub limitations

Unfortunately, there are some known issues and limitations caused by the GitHub API:

  • Test report (i.e. Check Run summary) is markdown text. No custom styling or HTML is possible.
  • Maximum report size is 65535 bytes. The list-suites and list-tests inputs will be automatically adjusted if the maximum size is exceeded.
  • A test report can't reference any additional files (e.g. screenshots). You can use actions/upload-artifact@v3 to upload them and inspect them manually; see the sketch after this list.
  • Check Runs are created for a specific commit SHA. It's not possible to specify which workflow a test report should belong to if multiple workflows are running for the same SHA. Thanks to this GitHub "feature", your test report may appear in an unexpected place in the GitHub UI. For more information, see #67.
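
A sketch of uploading such additional files alongside the report (the artifact name and path are illustrative):

      - uses: actions/upload-artifact@v3
        if: failure()                     # keep screenshots from failed runs
        with:
          name: screenshots               # illustrative artifact name
          path: screenshots/              # illustrative path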

See also

  • paths-filter - Conditionally run actions based on files modified by PR, feature branch, or pushed commits

License

The scripts and documentation in this project are released under the MIT License.


Issues

Validation Error "summary exceeds a maximum bytesize of 65535"

Hi.

Thank you for test-reporter.

Configuration:

      - name: Publish Unit Test Results
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: Tests
          path: TestResults.trx
          reporter: dotnet-trx
          max-annotations: 50

Result:

<snipped>
Creating report summary
Generating check run summary
Creating annotations
Creating check run 'Tests' with conclusion 'failure'
##[error]Validation Failed: {"resource":"CheckRun","code":"custom","field":"summary","message":"summary exceeds a maximum bytesize of 65535"}

Full step log:
15_Publish Unit Test Results.txt

Any ideas?

Is the trx too big?
Can I increase the maximum allowed size?

Thank you.

fatal: Not a git repository

I'm not sure why, but I'm getting this error (screenshot attached).
My build script is a bit long, so copying and pasting all of it here seems unneeded.

name: Tests

on:
  - push
  - pull_request
jobs:  
  test:
    runs-on: ubuntu-latest
    container: node:10.18-jessie
    strategy:
      matrix:
        dotnet-version:
          - 2.1.x
    services:
      postgres:
        image: postgis/postgis:10-2.5
        env:
          POSTGRES_PASSWORD: '111+++'
          POSTGRES_DB: sigparser        
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    env:
      ...
    steps:
      - uses: actions/checkout@v2
      - name: 'Setup .NET Core SDK ${{ matrix.dotnet-version }}'
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '${{ matrix.dotnet-version }}'
      - name: Install dependencies
        run: dotnet restore
      - name: Run Migrations
        run: > 
          dotnet run 
          --project ./DBUpgrader/DBUpgrader.csproj 
          -- migrate  
          --recreate true
          --fillconfigkeytable true 
          --trycreateifnotexists true 
      - name: What directory are we in?
        run: pwd
      - name: Test
        run: dotnet test --logger "trx;LogFileName=test-results.trx"
      - name: Publish Unit Test Results
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: Test Results
          path: ./**/TestResults/*.trx
          reporter: 'dotnet-trx'
  

No report available

I have a strange issue that surfaced over the last couple of days.

Everything seems to be OK: I can see that the report has been generated, and I have a "Check run HTML" URL in my action output. When I click that URL I get a 404, and the report isn't available in the sidebar menu either.

Is this something more people have experienced? Could it be because a branch was deleted, or why else might this be?

Fails with "Error: Resource not accessible by integration"

When I try to publish my mocha-json results I get this error. No clues as to the cause in the logs though.

Run dorny/test-reporter@v1
  with:
    name: UI Unit Test Results Linux-ubuntu1804-x86_64
    path: common/apps/ui/**/ui-test-results.json
    reporter: mocha-json
    fail-on-error: false
    path-replace-backslashes: false
    list-suites: all
    list-tests: all
    max-annotations: 10
    only-summary: false
    token: ***
  env:
    VERSION: 9.0.1-419-g103764bM
    PLAT: Linux-ubuntu1804
    ARCH: x86_64
Check runs will be created with SHA=103764b486215b834c933ce803b46b29fbd1a276
Listing all files tracked by git
Found 10948 files tracked by GitHub
Using test report parser 'mocha-json'
Creating test report UI Unit Test Results Linux-ubuntu1804-x86_64
  Processing test results from common/apps/ui/ui-test-results.json
  Creating check run UI Unit Test Results Linux-ubuntu1804-x86_64
Error: Resource not accessible by integration

Enable PAT for authentication

Trying to run test-reporter locally using nektos/act with a personal access token results in:

| Creating check run Tests
[Test/run_tests]   ❓  ::endgroup::
[Test/run_tests]   ❗  ::error::You must authenticate via a GitHub App.
[Test/run_tests]   ❌  Failure - Test report

I imagine this is because creating the check run necessarily requires access to a repo. I wonder if there could be a dry-run flag which would cause the action to parse the report but refrain from actually creating the check run. This would allow the pipeline to continue.

Can't find the artifact

I always get an error saying it can't find the artifact.
Here is my ci.yml:

name: 'CI'

on:
  pull_request:
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - run: npm run test
      - run: npm run test4
      - uses: actions/upload-artifact@v2
        if: success() || failure()
        with:
          name: daily-test-results
          path: 'daily-test-results.json'

dailytest.json

name: Node.js CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  workflow_run:
    workflows: ['CI']
    types:
      - completed
jobs:
  build-test:
    name: Build and Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - run: npm run test
      - run: npm run test4

      - name: Upload test results
        if: success() || failure()
        uses: actions/upload-artifact@v2
        with:
          name: daily-test-results
          path: daily-test-results.json

      - name: Daily tests report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          artifact: daily-test-results
          name: daily-test-results
          path: 'artifacts\daily-test-results.json'
          reporter: mocha-json
          only-summary: 'false'
          list-suites: 'all'
          list-tests: 'all'

I need a hand with this issue.

Include Test Result Files in the report from TRX files

<ResultFiles> from TRX files are not included in the report.

I've attached a sample TRX file from a set of UI tests where a screenshot is taken and attached in case of an error for analysis:

<ResultFiles>
  <ResultFile path="fv-az28-358\error_waitingfeaturetests_waitforthescreentobeenabled_20210216_100645.html" />
  <ResultFile path="fv-az28-358\error_waitingfeaturetests_waitforthescreentobeenabled_20210216_100645.jpg" />
</ResultFiles>

Is it possible to include those as part of the report?

Thank you.

TestResults.zip

Report panel includes scroll bar

I'm not sure if this is something under your control, but I've found that my generated report always shows a horizontal scroll bar (screenshot attached). As a result, I can't see the complete set of columns off to the right, even though there appears to be enough available space.

I think I've found the style that causes this. If I disable it in the Chrome debugger, the scroll bar vanishes and the whole report displays nicely.

The first column that shows the XML test file is already wrapping, so I wondered if there was a way to avoid the horizontal scroll bar?

Issues/advice for trying to create a test report for SQL Tests

I am using RedGate tools for running SQL tests in GitHub and I would love to create a report for this with this tool, but no matter how I run it there seems to be some kind of error.
The RedGate tools allow me to export results to JUnit or MSTest.
So far I have tried what seems to be all of the ways, and I get these errors:

RedGate test results format: JUnit
reporter: jest-junit
Error: stackTrace.split is not a function

RedGate test results format: JUnit
reporter: java-junit
Error: Cannot read property 'split' of undefined

RedGate test results format: MSTest
reporter: dotnet-trx
Error: Cannot read property '0' of undefined

Java JUnit support

Tracking issue for JUnit support.

Experimental support was added in v1.3.0 with java-junit reporter.
Implementation is based on test result files taken from lhotari/pulsar.

Due to my lack of experience/interest in the Java ecosystem, no Java project to create test fixtures was added. There is also no documentation on how to use JUnit to get XML results - I expect there are plenty of other resources on this topic.

@lhotari
Here you can see how it looks: https://github.com/dorny/pulsar/runs/2052225393?check_suite_focus=true
My setup is based on your lh-refactor-pulsar-ci-with-retries branch.
I made one test fail on purpose to verify that failure annotations are working.
I also modified the CI workflow to upload merged-test-report.xml instead of an XML per test class.
It works both ways, but with the XML-per-test-class approach you get more noise in the report.
I've also noticed there are some duplicates.
Multiple test results with the same name are also present in the XML, so it's probably not a bug. Still, I'm not sure why that is and whether I should handle it in some specific way.
Please keep me updated how it works for you.

TRX file with no test cases throws an error

I am using the following configuration:

#   ...checkout, restore, build, etc.

    - name: Test
      run: dotnet test --no-build --verbosity normal --logger "trx;LogFileName=test-results.trx"
    - name: Test Result
      uses: dorny/test-reporter@v1
      if: always()
      with:
        name: .NET Tests
        path: '**/*.trx'
        reporter: dotnet-trx

One of my projects has no tests and produces a TRX file without Test definitions:

<?xml version="1.0" encoding="utf-8"?>
<TestRun id="80e4c095-f726-4ab2-9441-416daa162672" name="..." runUser="..." xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Times creation="2021-02-26T10:36:33.7131022+02:00" queuing="2021-02-26T10:36:33.7131029+02:00" start="2021-02-26T10:36:33.3278956+02:00" finish="2021-02-26T10:36:33.7139830+02:00" />
  <TestSettings name="default" id="863a1d8b-ee3b-45f9-86ee-1869bc4e889f">
    <Deployment runDeploymentRoot="..." />
  </TestSettings>
  <TestLists>
    <TestList name="Results Not in a List" id="8c84fa94-04c1-424b-9868-57a2d4851a1d" />
    <TestList name="All Loaded Results" id="19431567-8539-422a-85d7-44ee4e166bda" />
  </TestLists>
  <ResultSummary outcome="Completed">
    <Counters total="0" executed="0" passed="0" failed="0" error="0" timeout="0" aborted="0" inconclusive="0" passedButRunAborted="0" notRunnable="0" notExecuted="0" disconnected="0" warning="0" completed="0" inProgress="0" pending="0" />
    <RunInfos>
      <RunInfo computerName="..." outcome="Warning" timestamp="2021-02-26T10:36:33.6676104+02:00">
        <Text>No test is available in (...). Make sure that test discoverer &amp; executors are registered and platform &amp; framework version settings are appropriate and try again.</Text>
      </RunInfo>
    </RunInfos>
  </ResultSummary>
</TestRun>

The following error is thrown:

Run dorny/test-reporter@v1
Check runs will be created with SHA=6b6aeb6092dc5c41605c5be7d7c3027dded62c59
Listing all files tracked by git
Found 37 files tracked by GitHub
Using test report parser 'dotnet-trx'
Creating test report .NET Tests
Error: trx.TestRun.TestDefinitions is not iterable

I changed the path and ignored that TRX file, and it all worked well.
From what I see the TRX file usually has Results, TestDefinitions and TestEntries elements inside the TestRun root element.

You can either check TestRun.ResultSummary.Counters if there are any tests, or just check if TestDefinitions is available before iterating it.
I would expect that the report will show the text under ResultSummary.RunInfos.RunInfo.Text.

Test report doesn't appear in expected place in GitHub UI

Hi,

thanks a lot for this Action 👍

We currently have multiple GitHub Actions Workflows (main.yaml , lint-yaml.yaml etc ).

When using the action inside of the main.yaml, the test results appear under the lint-yaml results 🤔

Not 100% sure what is causing it

Our step within main.yaml looks like:

    - name: Report PR Test results
      uses: dorny/test-reporter@v1
      if: ${{ github.event_name == 'pull_request' && (success() || failure()) }}
      with:
        name: jest Tests
        path: reports/jest-*.xml
        reporter: jest-junit

Here is a (sanitized) output of the step:

Run dorny/test-reporter@v1
  with:
    name: jest Tests
    path: reports/jest-*.xml
    reporter: jest-junit
    list-suites: all
    list-tests: all
    max-annotations: 10
    fail-on-error: true
    token: ***
  env:
     // snipped
Action was triggered by pull_request: using SHA from head of source branch
Check runs will be created with SHA=8d1d74409581269df26b4df491b55819597d9501
Listing all files tracked by git
  /usr/bin/git ls-files -z
  // ...
Found 190 files tracked by GitHub
Using test report parser 'jest-junit'
Creating test report jest Tests
  Processing test results from reports/jest-junit.xml
  Creating report summary
  Generating check run summary
  Creating annotations
  Creating check run with conclusion success
  Check run create response: 201
  Check run URL: https://api.github.com/repos/XXXX/YYYY/check-runs/2005347522
  Check run HTML: https://github.com/XXXX/YYYY/runs/2005347522

Consider adding coverage reports (e.g. for lcov format)

Hey there,

First, thanks for your work - this GitHub action is awesome!

Nevertheless, it would be even more awesome if one could see code coverage information next to the test reports. Maybe an additional property coverage-files could be added (see this one as an example), which takes lcov files as input and displays the information if the property is set? What are your thoughts on that?

Thanks in advance.
Best regards,
Alex

Junit testsuites attributes

Would it be possible to make the attributes on the testsuites element optional? I am trying to get the wdio junit reporter to work with the jest-junit functionality, and the wdio reporter doesn't provide any attributes on the testsuites element.

Add enhancement to show results in Check summary

Is there any way we can add "# tests run, # passed, # failed, # skipped" to the check summary that appears on the check result for the pull request? Currently just the icon is displayed.

Maybe have an input of summary: icon || results

Similar to the attached screenshot.

It could be a simple change - I believe it's where you call the checks.update API:

const resp = await this.octokit.checks.update({
  check_run_id: createResp.data.id,
  conclusion,
  status: 'completed',
  output: {
    title: `${name} ${icon}`,
    summary,
    annotations
  },
  ...github.context.repo
})

Support very large test results

It looks like I used and reported this issue on a friendly fork of this project. Since the code is still almost identical, reporting here.

My project produces millions of test executions over a 2-hour build. I was hoping to use this action for a summary of the results, which would be as lightweight as just the counts. Unfortunately, the action failed with an out-of-memory error on a 2 GB heap. Can the results be processed in a streaming fashion rather than read fully into memory, which would avoid having a limit?

Disabling git ls-files

We have a repo with over 25k files, and the output of "git ls-files", possibly from the dist/index.js call to listFiles(), adds significant noise to our logs. Is there a way to disable this feature?

Action failing to annotate on diff?

This action is being used a lot in ppy/osu, and is very critical, as the annotations provide contributors and developers all the essential information without trawling through the summary. However, it apparently doesn't work when opening a file in diff view. Is the action supposed to annotate this proactively without a linked PR?

Fetching list of tracked files from GitHub fails with error "API rate limit exceeded for installation ID"

Problem

I'm working on refactoring the existing Apache Pulsar (apache/pulsar) build in my fork lhotari/pulsar.

This is the way how I'm using test-reporter:
https://github.com/lhotari/pulsar/blob/4c13ef2909aeb1de1b0354c07710fe8257a3ccb1/.github/workflows/pulsar-ci-test-report.yaml#L20-L39

However this fails with this type of errors

Action was triggered by workflow_run: using SHA and RUN_ID from triggering workflow
Check runs will be created with SHA=ef23acb30548a37de648cc89965f854668975e95
Fetching list of tracked files from GitHub
Error: API rate limit exceeded for installation ID 6407194.

failed workflow run.

Full logs
2021-03-05T09:11:25.6157659Z ##[section]Starting: Request a runner to run this job
2021-03-05T09:11:25.9270299Z Can't find any online and idle self-hosted runner in current repository that matches the required labels: 'ubuntu-20.04'
2021-03-05T09:11:25.9270420Z Can't find any online and idle self-hosted runner in current repository's account/organization that matches the required labels: 'ubuntu-20.04'
2021-03-05T09:11:25.9270817Z Found online and idle hosted runner in current repository's account/organization that matches the required labels: 'ubuntu-20.04'
2021-03-05T09:11:26.1074475Z ##[section]Finishing: Request a runner to run this job
2021-03-05T09:11:32.0681059Z Current runner version: '2.277.1'
2021-03-05T09:11:32.0716620Z ##[group]Operating System
2021-03-05T09:11:32.0717668Z Ubuntu
2021-03-05T09:11:32.0718135Z 20.04.2
2021-03-05T09:11:32.0718585Z LTS
2021-03-05T09:11:32.0719158Z ##[endgroup]
2021-03-05T09:11:32.0719747Z ##[group]Virtual Environment
2021-03-05T09:11:32.0720491Z Environment: ubuntu-20.04
2021-03-05T09:11:32.0721098Z Version: 20210302.0
2021-03-05T09:11:32.0722229Z Included Software: https://github.com/actions/virtual-environments/blob/ubuntu20/20210302.0/images/linux/Ubuntu2004-README.md
2021-03-05T09:11:32.0723689Z Image Release: https://github.com/actions/virtual-environments/releases/tag/ubuntu20%2F
2021-03-05T09:11:32.0724686Z ##[endgroup]
2021-03-05T09:11:32.0726938Z ##[group]GITHUB_TOKEN Permissions
2021-03-05T09:11:32.0728255Z Actions: write
2021-03-05T09:11:32.0728785Z Checks: write
2021-03-05T09:11:32.0729399Z Contents: write
2021-03-05T09:11:32.0729971Z Deployments: write
2021-03-05T09:11:32.0730682Z Issues: write
2021-03-05T09:11:32.0731282Z Metadata: read
2021-03-05T09:11:32.0731999Z OrganizationPackages: write
2021-03-05T09:11:32.0732756Z Packages: write
2021-03-05T09:11:32.0733340Z PullRequests: write
2021-03-05T09:11:32.0734077Z RepositoryProjects: write
2021-03-05T09:11:32.0734765Z SecurityEvents: write
2021-03-05T09:11:32.0735356Z Statuses: write
2021-03-05T09:11:32.0736036Z ##[endgroup]
2021-03-05T09:11:32.0739249Z Prepare workflow directory
2021-03-05T09:11:32.1436660Z Prepare all required actions
2021-03-05T09:11:32.1450497Z Getting action download info
2021-03-05T09:11:32.5381111Z Download action repository 'dorny/test-reporter@v1'
2021-03-05T09:11:34.2772567Z ##[group]Run dorny/test-reporter@v1
2021-03-05T09:11:34.2773285Z with:
2021-03-05T09:11:34.2773817Z   artifact: /(.*)-test-report/
2021-03-05T09:11:34.2774344Z   name: $1 Tests
2021-03-05T09:11:34.2774808Z   path: *.xml
2021-03-05T09:11:34.2775370Z   reporter: jest-junit
2021-03-05T09:11:34.2775923Z   max-annotations: 50
2021-03-05T09:11:34.2776932Z   token: ***
2021-03-05T09:11:34.2777442Z   list-suites: all
2021-03-05T09:11:34.2777970Z   list-tests: all
2021-03-05T09:11:34.2778501Z   fail-on-error: true
2021-03-05T09:11:34.2779040Z ##[endgroup]
2021-03-05T09:11:35.6311663Z Action was triggered by workflow_run: using SHA and RUN_ID from triggering workflow
2021-03-05T09:11:35.6412819Z Check runs will be created with SHA=ef23acb30548a37de648cc89965f854668975e95
2021-03-05T09:11:35.6413983Z Fetching list of tracked files from GitHub
2021-03-05T09:11:35.6420550Z ##[error]API rate limit exceeded for installation ID 6407194.
2021-03-05T09:11:35.6470414Z Cleaning up orphan processes

It seems that test-reporter exceeds the GitHub API rate limits when using a large repository like apache/pulsar.

Expected behavior

test-reporter would use the GitHub API in a way that doesn't exceed the rate limits.

Is there any workaround for this problem?

Other

I have reported a similar issue about paths-filter, dorny/paths-filter#73

Output the test report URL

In order to provide a link to the test report in follow-up jobs, it would be desirable for the address logged by core.info(`Check run HTML: ${resp.data.html_url}`) to also be available as an output variable.

[dotnet-trx] Cannot read property 'match' of undefined

Some logs:

Run dorny/test-reporter@v1
  with:
    name: test-results
    reporter: dotnet-trx
    path: TestResults/*.trx
    list-suites: all
    list-tests: all
    max-annotations: 10
    fail-on-error: true
    token: ***
  env:
    NUGET: D:\Action-runner\intel-innersource\001_work_tool\nuget.exe\5.8.1\x64/nuget.exe
Action was triggered by pull_request: using SHA from head of source branch
Check runs will be created with SHA=30d3bbd87740e62cf5586edaa9a64fa0605ddf1c
Listing all files tracked by git
Found 964 files tracked by GitHub
Using test report parser 'dotnet-trx'
Creating test report test-results
Error: Cannot read property 'match' of undefined


And this is the summary of the test result (since I know some Marketplace Actions need users to specify the upper bound of the number of passed/failed cases):

Test Run Failed.
Total tests: 139
Passed: 65
Failed: 71
Skipped: 3
Total time: 2.0720 Minutes


I was running an old project, and I know it uses a lot of things from pre-2015. So the reason may be that something is too old to be compatible with your stuff. Let me know what you need from me, or whether you can repro the issue.

Thanks,

Junhao Zhang "Freddie"

want to filter on changes in `CHANGELOG.md` but it also returns true when other `.md` files are changed

name: validate PR 
on:
  pull_request:
    branches: [ develop ]

jobs:
  lint-static_checks-test-build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: check if mandatory files are changed
        uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            md:
              - 'CHANGELOG.md'

      - name: FAIL if mandatory files are not changed
        if: ${{ steps.filter.outputs.md == 'false' }}
        uses: actions/github-script@v3
        with:
          script: |
            core.setFailed('Mandatory markdown files were not changed')

My intent with this code is to fail the action if CHANGELOG.md is not changed in a PR. But I have 3 questions:

  1. For a PR, the action runs whenever we push a commit to the source branch. So if one particular commit changed CHANGELOG.md, does that mean all subsequent commit pushes will automatically consider the file as changed, so the filter will always return true? Or is it evaluated against every single commit - e.g. if commit A changed CHANGELOG.md but commit B didn't, would it fail for commit B?
  2. Is this filter even targeting CHANGELOG.md, or just .md files in general?
  3. If the filter considers the total number of file changes in the head commit, and not the previous commits, would it fail if I changed CHANGELOG.md and then undid the changes in that file?

BUG: test-reporter does not load all the artifacts

Results table has horizontal scrollbar

Hi, we're trying out your test reporter for trx files, and the results table is quite thin with a horizontal scroll bar (see screenshot). I'd link you, but it's a private repo.

We've got some pretty long test class names, so I imagine that's why. Is it possible to widen the table so there's no scrollbar?

Thanks in advance.

Using mochawesome

Hey,

Can I use my mochawesome.json file with this action? I tried using it with the mocha setting, but it doesn't work. I'm getting the error mocha.passes is not iterable for the following configuration:

      - name: Report E2E Results
        if: ${{ !cancelled() }}
        uses: dorny/test-reporter@v1
        with:
          name: E2E Tests Results
          path: test/e2e/report/mochawesome.json
          reporter: mocha-json

'list-tests: failed' still reporting successful tests.

I'm trying to process a JUnit file that exceeds the max report size, so I'm setting 'list-tests: failed', but the test report is still showing all of the successful tests.

Run dorny/test-reporter@v1
  with:
    name: Results -> Nightly
    path: **/digital/verilog_sims/regression/list_results_*.xml
    reporter: java-junit
    max-annotations: 50
    list-tests: failed
    path-replace-backslashes: false
    list-suites: all
    fail-on-error: true
    only-summary: false
    token: ***


Creating test report Results -> Nightly
  Processing test results from digital/verilog_sims/regression/list_results_all.xml
  Creating check run Results -> Nightly
  Creating report summary
  Generating check run summary
  Warning: Test report summary exceeded limit of 65535 bytes and will be trimmed
  Creating annotations
  Updating check run conclusion (failure) and output
  Check run create response: 200

The resulting report (screenshot attached) still lists the successful tests.

Exceeding the maximum annotations fails to produce a report

I'm using the test reporter to report test failures in workflows with test suites that often exceed 1000 tests, and it's somewhat common to have more than 50 tests fail in any given run. Per the specifications, the maximum number of detailed annotations permitted is 50 (with the default being 10), and it was my assumption that if the number of failed tests exceeded 50, 50 annotations would still be created and the remaining would simply be cut off.

However, what I'm seeing is that if the number of failed tests exceeds 50, the resulting report fails to generate and the test reporter produces an "Error: Invalid request" message.

Ideally, the maximum number of annotations would be raised beyond 50; however, producing only 50 of the failing annotations would also be a sufficient, albeit less desirable, solution.

Thank you!

Cannot read property 'split' of undefined

I get ##[error]Cannot read property 'split' of undefined when parsing test results with the java-junit reporter.
I looked at the code of the action, and I understand that the error comes from parsing the content of the file. Unfortunately, it's hard for me to pinpoint exactly where that's happening, as there's no stack trace and there's a huge try/catch in main.
I looked at the split calls used in the code, and I'd be tempted to say the issue could be around parsing the stack trace of the failing tests. However, the same test failures are happening on other platforms, and there the action works just fine. I have this failure only in tests run on UWP.
This is the test run where the issue appears.
I'm using version 1.5.0 because of the nice addition of path-replace-backslashes. If you believe this error could have been introduced by this latest version, I can try to convert the backslashes myself and try 1.4.x.

No file found generated by mocha-json

Here's what my workflow looks like:

name: Eslint & Mocha test check

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [12.x, 14.x, 16.x]

    defaults:
      run:
        working-directory: ./CAScript-node-js-runtime

    steps:
    - uses: actions/checkout@v2
    - run: npm ci
    - run: npm run build --if-present
    - run: npm test
    - name: Test Report
      uses: dorny/test-reporter@v1
      if: success() || failure()
      with:
        name: Mocha Tests
        path: test-results.json
        reporter: mocha-json

And the test command: mocha -r ts-node/register test/**/*.ts --reporter json > test-results.json

Am I doing anything wrong?

Unexpected failure when parsing test results from stable Dart SDK

Hi there!

I noticed today that the following line is possible in test-results.json within a Dart unit test results output:

{"testID":1,"messageType":"print","message":"Consider enabling the flag chain-stack-traces to receive more detailed exceptions.\nFor example, 'pub run test --chain-stack-traces'.","type":"print","time":65561}

The presence of this line - specifically, a line with "messageType":"print" - causes this action to throw:

Error: Cannot read property 'print' of undefined

In this case, I was able to overcome the issue by enabling the chain-stack-traces flag in my test command - but I would imagine that this same error will occur if a user places a print() statement within a test.

Fails to find files with mixed backward slashes in path

With these options set:

      - name: Publish Test Results
        uses: dorny/test-reporter@v1
        with:
          name: AP Test Results win-x64 Release
          path: "${{runner.temp}}/Acoustics.Test_Results/**/*.trx"
          reporter: "dotnet-trx"

The action fails to find any files:

Run dorny/test-reporter@v1
  with:
    name: AP Test Results win-x64 Release
    path: D:\a\_temp/Acoustics.Test_Results/**/*.trx
    reporter: dotnet-trx
    list-suites: all
    list-tests: all
    max-annotations: 10
    fail-on-error: true
  env: {}
Check runs will be created with SHA=784cf60c324629adef252f2338961c097772c804
::group::Listing all files tracked by git
Listing all files tracked by git
Found 1320 files tracked by GitHub
Using test report parser 'dotnet-trx'
::group::Creating test report AP Test Results win-x64 Release
Creating test report AP Test Results win-x64 Release
  Warning: No file matches path D:\a\_temp/Acoustics.Test_Results/**/*.trx
  ::endgroup::
::set-output name=conclusion::success
##[debug]='success'
::set-output name=passed::0
##[debug]='0'
::set-output name=failed::0
##[debug]='0'
::set-output name=skipped::0
##[debug]='0'
::set-output name=time::0
##[debug]='0'
Error: No test report files were found
##[debug]Node Action run completed with exit code 1
##[debug]Finishing: Publish Test Results

The issue is that the GitHub runner.temp context variable contains \ slashes in the path.

Creating a safe version of runner.temp in a previous workflow step allows me to bypass the problem:

      - name: Calculate variables
        id: calc_vars
        shell: pwsh
        run: |
          $safe_temp = "${{ runner.temp }}" -replace "\\","/"
          echo "::set-output name=SAFE_TEMP::$safe_temp"

With that, this will work:

      - name: Publish Test Results
        uses: dorny/test-reporter@v1
        with:
          name: AP Test Results win-x64 Release
          path: "${{  steps.calc_vars.outputs.SAFE_TEMP }}/Acoustics.Test_Results/**/*.trx"
          reporter: "dotnet-trx"

I think the path just needs to be normalized before you pass it to the glob function.
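
For reference, the path-replace-backslashes input documented in the Usage section above addresses exactly this scenario; a minimal sketch based on this report's configuration (version pin simplified to @v1):

      - name: Publish Test Results
        uses: dorny/test-reporter@v1
        with:
          name: AP Test Results win-x64 Release
          path: "${{ runner.temp }}/Acoustics.Test_Results/**/*.trx"
          path-replace-backslashes: 'true'  # backslashes from runner.temp become directory separators
          reporter: "dotnet-trx"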

No artifacts found in run

Attempting to use this to get my test results posted, but I'm seeing the "no artifacts found" warning despite my artifacts existing (screenshots attached).

Here's part of my YAML:

      - uses: actions/upload-artifact@v2
        if: success() || failure()
        with:
          name: binaries
          path: artifacts/bin
      - uses: actions/upload-artifact@v2
        if: success() || failure()
        with:
          name: test-reports
          path: artifacts/test-reports
      - uses: actions/upload-artifact@v2
        with:
          name: nugets
          path: artifacts/nuget

      - uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          artifact: test-reports
          name: Tests
          path: "*.trx"
          reporter: dotnet-trx
          token: ${{ secrets.REPORT_TOKEN }}

Should I add a sleep or something before running the reporter?

I'm confused about what might be going on here - I've been playing with it but just can't get it working.

Adhere to defaults.run working-directory?

Should this action adhere to the defaults.run working-directory and prepend it to the path?

defaults:
  run:
    working-directory: ./src/

With

      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()    # run this step even if previous step failed
        with:
          name: JEST Tests            # Name of the check run which will be created
          path: reports/jest-*.xml    # Path to test results
          reporter: jest-junit        # Format of test results

so that path would become ./src/reports/jest-*.xml?

Invalid format: "00:00:18" is not NET duration

Hi, I've noticed that on some special occasions the dotnet trx file will output a duration that isn't parsed by this regex:

const durationRe = /^(\d\d):(\d\d):(\d\d\.\d+)$/

I'm unsure under what circumstances this duration ends up being different from the others, but it seems possible to end up with a format other than the typical 00:00:00.0120000 style. Below is the XML fragment causing the issue:

<UnitTestResult executionId="a35d36d2-be5e-4df1-97bf-edcf5bb32fbe" testId="c09012a9-dca9-30d9-25de-f0be315359ca" testName="Index_Read_And_Write_Ensure_No_Errors_In_Async(2000,5000,20,50,100,50,True)" computerName="fv-az28-465" duration="00:00:18" startTime="2021-03-22T23:52:27.0000000+00:00" endTime="2021-03-22T23:52:45.0000000+00:00" testType="13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b" outcome="Passed" testListId="8c84fa94-04c1-424b-9868-57a2d4851a1d" relativeResultsDirectory="a35d36d2-be5e-4df1-97bf-edcf5bb32fbe">

where duration="00:00:18", so it seems the milliseconds are optional.
