publish-unit-test-result-action's Issues

Documentation on token

Sorry to bother you with such a basic question, but what are the expected permissions for the GITHUB_TOKEN?
I registered a new token with the repo and workflow scopes, but it does not seem to do the job:

The token seems to be used:

Run EnricoMi/[email protected]
  with:
    github_token: ***
    files: build_report.xml
    check_name: Unit Test Results
    hide_comments: all but latest
    comment_on_pr: true
    log_level: INFO

but GitHub complains with

github.GithubException.GithubException: 403 {"message": "You must authenticate via a GitHub App.", "documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"} 

Should I register this app myself? I'm a bit lost.
Thanks.
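For context, this action is normally run with the workflow's automatically provided GITHUB_TOKEN rather than a personal access token. A minimal, hedged workflow sketch (the permissions block reflects what the check-run and PR-comment APIs generally require; adapt names to your workflow):

```yaml
permissions:
  checks: write          # needed to create check runs
  pull-requests: write   # needed to comment on the PR

steps:
  - name: Publish Unit Test Results
    uses: EnricoMi/publish-unit-test-result-action@v1
    if: always()
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      files: build_report.xml
```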

Random failure with `github.GithubException.GithubException: 500 null`

I recently observed this random failure. It seems completely unrelated to the actual commit that the workflow was triggered for, as the workflow had previously executed without an error for the same commit.

Not sure where the actual error is, but here's the stacktrace:

Run EnricoMi/publish-unit-test-result-action@v1
  with:
    files: firmware/tests/build/bin/**/*.xml
    github_token: ***
    check_name: Unit Test Results
    fail_on: test failures
    hide_comments: all but latest
    comment_on_pr: true
    pull_request_build: merge
    check_run_annotations: all tests, skipped tests
/usr/bin/docker run --name ghcrioenricomipublishunittestresultactionv111_09e53a --label 5588e4 --workdir /github/workspace --rm -e INPUT_FILES -e INPUT_GITHUB_TOKEN -e INPUT_COMMIT -e INPUT_CHECK_NAME -e INPUT_COMMENT_TITLE -e INPUT_FAIL_ON -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_HIDE_COMMENTS -e INPUT_COMMENT_ON_PR -e INPUT_PULL_REQUEST_BUILD -e INPUT_TEST_CHANGES_LIMIT -e INPUT_CHECK_RUN_ANNOTATIONS -e INPUT_CHECK_RUN_ANNOTATIONS_BRANCH -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/Wunderkiste/Wunderkiste":"/github/workspace" ghcr.io/enricomi/publish-unit-test-result-action:v1.11
2021-04-10 07:50:41 +0000 - publish-unit-test-results -  INFO - reading firmware/tests/build/bin/**/*.xml
2021-04-10 07:50:41 +0000 - publish.publisher -  INFO - publishing success results for commit 6cee9c9b0d9b3fd0f7758337e79225cf24db0f21
2021-04-10 07:50:41 +0000 - publish.publisher -  INFO - creating check
2021-04-10 07:50:43 +0000 - publish.publisher -  INFO - creating comment
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 194, in <module>
    main(settings)
  File "/action/publish_unit_test_results.py", line 68, in main
    Publisher(settings, gh, gha).publish(stats, results.case_results, conclusion)
  File "/action/publish/publisher.py", line 55, in publish
    self.publish_comment(self._settings.comment_title, stats, pull, check_run, cases)
  File "/action/publish/publisher.py", line 244, in publish_comment
    pull_request.create_issue_comment(f'## {title}\n{summary}')
  File "/usr/local/lib/python3.6/site-packages/github/PullRequest.py", line 457, in create_issue_comment
    "POST", self.issue_url + "/comments", input=post_parameters
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 317, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 340, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.GithubException: 500 null

Here's a link to the specific workflow run

Any ideas what the cause may be?

Comparing against the wrong base (master) commit

The action compares a commit of a branch to the commit where it branched off master. GitHub seems to merge this commit with master's head, so the unit tests include master's head and should be compared against that (at least in pull_request_target). This has happened here. Make sure to compare against the master commit that is part of the merge, not master's head, as master could have moved on while the action is running.

What if that commit cannot be merged into master? Which commit does GitHub Actions run on then?

Limit check run fields size

The GitHub API limits the size of some JSON fields in API calls, especially when creating check runs:
https://developer.github.com/v3/checks/runs/#annotations-object-1

Annotation:

  • message: 64 KB
  • title: 255 characters
  • raw_details: 64 KB

Limit these fields by abbreviating the strings with an ellipsis in the middle of the string. Make sure the title cannot exceed 255 characters without being abbreviated.

Limiting a string by bytes is tricky, as some characters may use multiple bytes. The easiest approach is to limit it to 16k characters, which is safe in terms of bytes in any situation (UTF-8 uses at most four bytes per character).
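A sketch of the proposed middle-ellipsis abbreviation (hypothetical helper, not the action's actual code):

```python
def abbreviate(text: str, limit: int = 16 * 1024) -> str:
    """Shorten text to at most `limit` characters by replacing the
    middle with an ellipsis, keeping the start and end visible."""
    if len(text) <= limit:
        return text
    # reserve one character for the ellipsis itself
    head = (limit - 1) // 2
    tail = limit - 1 - head
    return text[:head] + '…' + text[len(text) - tail:]
```

Limiting by characters (not bytes) sidesteps the risk of cutting a multi-byte character in half.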

Combining trigger pull_request and workflow_run

Hi, as per GitHub's recommendations, it is dangerous to use pull_request_target; they recommend using pull_request together with workflow_run.

I have tried to do this with this GitHub Action here and here are the logs:

/usr/bin/docker run --name e400d0fdeb040d443bb87865509f8e811d_620554 --label 5588e4 --workdir /github/workspace --rm -e INPUT_CHECK_NAME -e INPUT_GITHUB_TOKEN -e INPUT_FILES -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_CHECK_RUN_ANNOTATIONS -e INPUT_COMMIT -e INPUT_COMMENT_TITLE -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_HIDE_COMMENTS -e INPUT_COMMENT_ON_PR -e INPUT_TEST_CHANGES_LIMIT -e INPUT_CHECK_RUN_ANNOTATIONS_BRANCH -e INPUT_LOG_LEVEL -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/nHapi/nHapi":"/github/workspace" 5588e4:00d0fdeb040d443bb87865509f8e811d
2021-02-10 20:52:10 +0000 - publish-unit-test-results -  INFO - reading **/*.xml
2021-02-10 20:52:10 +0000 - publish.publisher -  INFO - publishing success results for commit b1e94751b354cc4fbd75e4bb282eca419f6555d4
2021-02-10 20:52:10 +0000 - publish.publisher -  INFO - creating check
2021-02-10 20:52:11 +0000 - publish.publisher -  INFO - there is no pull request for commit b1e94751b354cc4fbd75e4bb282eca419f6555d4

What am I missing to ensure it will comment on the forked PR?

xUnit tests output in JUnit format display only as runs, not tests

Is this expected behaviour, or am I potentially doing something wrong?

  - name: Publish Unit Test Results
    uses: EnricoMi/publish-unit-test-result-action@v1
    if: always()
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      check_name: Unit Test Results
      check_run_annotations: all tests, skipped tests
      comment_on_pr: true
      test_changes_limit: 5
      hide_comments: all but latest
      files: Release/net472/virtual_out.xml
      report_individual_runs: true

Hide older comments by default

In #28 we have discussed that it would be great to have the action by default hide earlier comments so that only the latest comment created by the action is visible. This avoids filling up the PR with old test results.

Deleting test results leaves a message that a comment has been removed, which looks quite similar to a hidden comment. With hidden comments, users have a chance to look at older test results.

The action currently hides all comments that refer to commits that are no longer part of the branch / PR due to commit history rewrite. This can be extended to all but the latest comment.

This default behaviour should be controlled by configuration.

Why can't this action be executed on the pull_request event?

I would like to execute this action only on the pull_request event. Why do you bypass this event?

2020-09-23 17:48:11 +0000 - publish-unit-test-results - DEBUG - action triggered by 'pull_request' event
2020-09-23 17:48:11 +0000 - publish-unit-test-results - WARNING - event '{}' is not supported

JUnit XSD doesn't require a failure body => GH Checks API => "For 'properties/raw_details', nil is not a string."

It appears the body of the test failure is supplied to GitHub's Checks API via a required property. However, a failure body may not always be present.

2020-11-03 17:36:53 +0000 - publish-unit-test-results -  INFO - creating check
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 801, in <module>
    main(token, event, repo, commit, files, check_name, report_individual_runs, dedup_classes_by_file_name)
  File "/action/publish_unit_test_results.py", line 754, in main
    publish(token, event, repo, commit, stats, results['case_results'], check_name, report_individual_runs)
  File "/action/publish_unit_test_results.py", line 722, in publish
    publish_check(stats, cases)
  File "/action/publish_unit_test_results.py", line 613, in publish_check
    repo.create_check_run(name=check_name, head_sha=commit_sha, status='completed', conclusion='success', output=output)
  File "/action/githubext/Repository.py", line 78, in create_check_run
    headers={'Accept': 'application/vnd.github.antiope-preview+json'},
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 319, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 342, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.GithubException: 422 {"message": "Invalid request.\n\nFor 'properties/raw_details', nil is not a string.\nFor 'properties/raw_details', nil is not a string.", "documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"}

Status message on GitHub Actions Summary Page not correct

Hello, I am using Karma tests for my project with the JUnit reporter to create .xml reports.
When I parse this XML with this GitHub Action, everything looks pretty good except the Summary Page. For some reason I see the same name for all failed tests.

I've found this topic: #34, but didn't find any way to solve the problem on the Karma side :(

I've also tried some other actions: scacap/action-surefire-report@v1, ashley-taylor/[email protected], and all of them managed to make this Summary Page look good.

So the problem is probably somewhere in this action.

Would appreciate any help.

Thanks

Publishes to random workflow

When there are multiple GitHub workflows for one commit, the action publishes the commit status to a random workflow. It should be the workflow that contains the action. This is due to GitHub's API not allowing you to specify which workflow to add the check run to.

Neither the REST nor the GraphQL API allows specifying the workflow:

The github.run_id workflow variable provides the id of the workflow run where the action runs.

Update 2022-05-14:
With GitHub introducing "job summary", the action now additionally publishes the results at the summary page of the workflow that runs the publish action. From there, a link to the check annotations is provided if failures exists: https://github.com/EnricoMi/publish-unit-test-result-action/tree/v1.35#github-actions-job-summary. However, this is not a proper fix for this issue.

Only Github can fix this issue. Please see this discussion: https://github.com/orgs/community/discussions/24616

Checks can be disabled with check_run: false.

Make action work on all OS

This action is only supported on Linux, since it is container based. Is it possible to support multiple OSes for use with self-hosted runners, like internal Macs?

Provide optional monospace PR comment layout

In #26 we discussed how a monospace comment could look. Here is a suggestion:

521 suites (-60) in 4h 24m 19s (-5m 23s)
508 tests   (+1)
482 success (+1)
 26 skipped (±0)
  0 failed  (±0)
9 876 runs    (-1 190)
7 873 success (-  896)
2 003 skipped (-  294)
    0 failed  (±    0)
results for commit e826078 ± comparison against base commit 3fefb1a

and without runs:

521 suites (-60) in 4h 24m 19s (-5m 23s)
508 tests   (+1)
482 success (+1)
 26 skipped (±0)
  0 failed  (±0)
results for commit e826078
± comparison against base commit 3fefb1a

Implement that as an optional comment layout.

Move PR comment details into collapsible section

GitHub Markdown supports collapsible sections: https://gist.github.com/pierrejoubert73/902cc94d79424356a8d20be2b382e1ab.

The lists of removed and skipped tests should go in there. With that, all four available lists could be added, and list limits can be increased, as comments do not get cluttered if lists are hidden initially.

The markdown in the summary only works when separated from the <summary> tag by newlines, so it is better to use HTML markup.

Example:

This pull request removes 4 and adds 21 tests. Note that renamed tests count towards both.
test.integration.test_spark.SparkTests ‑ test_get_available_devices
test.integration.test_spark.SparkTests ‑ test_happy_run_elastic
test.integration.test_spark.SparkTests ‑ test_happy_run_with_gloo
test.integration.test_spark.SparkTests ‑ test_happy_run_with_mpi
This pull request skips 39 tests.
test.parallel.test_adasum_pytorch.TorchAdasumTests ‑ test_orthogonal
test.parallel.test_adasum_pytorch.TorchAdasumTests ‑ test_parallel
test.parallel.test_mxnet.MXTests ‑ test_gluon_trainer
test.parallel.test_mxnet.MXTests ‑ test_gpu_required
test.parallel.test_mxnet.MXTests ‑ test_horovod_allreduce_cpu_gpu_error
...
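A hedged sketch of what such a collapsible section could look like in plain HTML (test names taken from the example above):

```html
<details>
  <summary>This pull request skips 39 tests.</summary>
  <ul>
    <li><code>test.parallel.test_mxnet.MXTests ‑ test_gluon_trainer</code></li>
    <li><code>test.parallel.test_mxnet.MXTests ‑ test_gpu_required</code></li>
  </ul>
</details>
```

Using HTML list markup inside the section avoids the markdown-after-`<summary>` newline quirk entirely.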

Distinguish between golang test and subtests

Golang's table-driven tests, used along with go-junit-report, generate JUnit files such as this:

	<testsuite tests="28" failures="0" time="0.031" name="github.com/Checkmarx/kics/pkg/engine">
		<properties>
			<property name="go.version" value="go version go1.15.6 linux/amd64"></property>
			<property name="coverage.statements.pct" value="35.1"></property>
		</properties>
		<testcase classname="engine" name="TestMapKeyToString" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-0" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-1" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-2" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-3" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-4" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_file_name" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_queryID" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_searchKey" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_filepath_dir" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/No_changes" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Relative_directory_resolution" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/No_changes,_empty_searchValue" time="0.000"></testcase>
		<testcase classname="engine" name="TestStandardizeFilePathEquals" time="0.000"></testcase>
		<testcase classname="engine" name="TestStandardizeFilePathEquals/Clean_input" time="0.000"></testcase>
		<testcase classname="engine" name="TestStandardizeFilePathEquals/Cleanup_double_slashes" time="0.000"></testcase>

Currently, the comment shows a very high number of unit tests because of the subtests:

It would be great if we could have a configuration option to differentiate tests (e.g. TestMapKeyToString) from subtests (e.g. TestMapKeyToString/mapKeyToString-0) and count them separately in the PR comments.
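Since go test encodes subtests as `TestName/subtest-name`, a simple classification could split on the `/` separator. A hypothetical sketch, not the action's code:

```python
def split_tests_and_subtests(names):
    """Separate top-level Go tests from subtests.

    go-junit-report writes subtest names as 'TestName/subtest-name',
    so any name containing '/' is treated as a subtest.
    """
    tests = [n for n in names if '/' not in n]
    subtests = [n for n in names if '/' in n]
    return tests, subtests
```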

Testing dependabot branch fails with "Resource not accessible by integration"

When running my workflow on dependabot branch dependabot/npm_and_yarn/typescript-eslint/parser-4.18.0 I am receiving the following error:

2021-03-16T14:49:50.9184853Z Traceback (most recent call last):
2021-03-16T14:49:50.9185525Z   File "/action/publish_unit_test_results.py", line 89, in <module>
2021-03-16T14:49:50.9186069Z     main(settings)
2021-03-16T14:49:50.9186877Z   File "/action/publish_unit_test_results.py", line 32, in main
2021-03-16T14:49:50.9187889Z     Publisher(settings, gh).publish(stats, results.case_results)
2021-03-16T14:49:50.9188632Z   File "/action/publish/publisher.py", line 59, in publish
2021-03-16T14:49:50.9189519Z     check_run = self.publish_check(stats, cases)
2021-03-16T14:49:50.9190217Z   File "/action/publish/publisher.py", line 163, in publish_check
2021-03-16T14:49:50.9190781Z     output=output)
2021-03-16T14:49:50.9191375Z   File "/action/githubext/Repository.py", line 78, in create_check_run
2021-03-16T14:49:50.9192995Z     headers={'Accept': 'application/vnd.github.antiope-preview+json'},
2021-03-16T14:49:50.9194475Z   File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 319, in requestJsonAndCheck
2021-03-16T14:49:50.9195678Z     verb, url, parameters, headers, input, self.__customConnection(url)
2021-03-16T14:49:50.9196789Z   File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 342, in __check
2021-03-16T14:49:50.9197860Z     raise self.__createException(status, responseHeaders, output)
2021-03-16T14:49:50.9200050Z github.GithubException.GithubException: 403 {"message": "Resource not accessible by integration", "documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"}

The full logs can be seen here. The tests are Cobertura-formatted XML test results; I have attached the test files used for this workflow below: Unit Test Results.zip

Any help would be appreciated.

Error while hiding comments in PullReq on GitHub Enterprise

Using GitHub Enterprise Server 3.0.0.

Workflow file (relevant parts):

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: [ ... ]
    steps:
    - uses: actions/checkout@v2

    - name: Set up JDK 11
      uses: actions/setup-java@v1
      with:
        java-version: 11

    - name: Build with Maven
      run: ./mvnw -B verify --file pom.xml

    - name: Publish Unit Test Results
      uses: EnricoMi/[email protected]
      if: always()
      with:
        files: target/surefire-reports/**/*.xml
      env:
        GITHUB_API_URL: https://<our.fqdn.here>/api/v3/

Error received:

Run EnricoMi/[email protected]
  with:
    files: target/surefire-reports/**/*.xml
    github_token: ***
    check_name: Unit Test Results
    hide_comments: all but latest
    comment_on_pr: true
    pull_request_build: merge
    check_run_annotations: all tests, skipped tests
    log_level: INFO
  env:
    JAVA_HOME_11.0.10_x64: /home/runner/_work/_tool/jdk/11.0.10/x64
    JAVA_HOME: /home/runner/_work/_tool/jdk/11.0.10/x64
    JAVA_HOME_11_0_10_X64: /home/runner/_work/_tool/jdk/11.0.10/x64
    GITHUB_API_URL: https://<our.fqdn.here>/api/v3/
/usr/bin/docker run --name b1cbc511eb0319cad24c82bd9c7f0910960709_aaa4bd --label b1cbc5 --workdir /github/workspace --rm -e JAVA_HOME_11.0.10_x64 -e JAVA_HOME -e JAVA_HOME_11_0_10_X64 -e GITHUB_API_URL -e INPUT_FILES -e INPUT_GITHUB_TOKEN -e INPUT_COMMIT -e INPUT_CHECK_NAME -e INPUT_COMMENT_TITLE -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_HIDE_COMMENTS -e INPUT_COMMENT_ON_PR -e INPUT_PULL_REQUEST_BUILD -e INPUT_TEST_CHANGES_LIMIT -e INPUT_CHECK_RUN_ANNOTATIONS -e INPUT_CHECK_RUN_ANNOTATIONS_BRANCH -e INPUT_LOG_LEVEL -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/_work/_temp/_github_home":"/github/home" -v "/home/runner/_work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/_work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/_work/<workspace.folder.redacted>/<workspace.folder.redacted>":"/github/workspace" b1cbc5:11eb0319cad24c82bd9c7f0910960709
2021-03-09 12:16:57 +0000 - publish-unit-test-results -  INFO - reading target/surefire-reports/**/*.xml
2021-03-09 12:16:57 +0000 - publish.publisher -  INFO - publishing success results for commit 421157be393553a7c6b3bde525dce2bb41305ba1
2021-03-09 12:16:57 +0000 - publish.publisher -  INFO - creating check
2021-03-09 12:16:58 +0000 - publish.publisher -  INFO - creating comment
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 181, in <module>
    main(settings)
  File "/action/publish_unit_test_results.py", line 67, in main
    Publisher(settings, gh, gha).publish(stats, results.case_results, conclusion)
  File "/action/publish/publisher.py", line 56, in publish
    self.hide_all_but_latest_comments(pull)
  File "/action/publish/publisher.py", line 354, in hide_all_but_latest_comments
    comments = self.get_pull_request_comments(pull)
  File "/action/publish/publisher.py", line 284, in get_pull_request_comments
    "POST", f'{self._settings.api_url}/graphql', input=query
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 317, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 340, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.UnknownObjectException: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/enterprise/3.0/rest"}

Note that the test results themselves are visible within the GitHub Actions output - this part works correctly.

Is GH Enterprise supported?
Looking at the code, I'm not sure if GITHUB_API_URL is picked up from environment variables (not a Python expert).
The same error occurs without GITHUB_API_URL set in the step.
Or maybe this is an issue with the action itself?

Support absolute path in `files` setting

I would like to collect test results somewhere in /tmp (so that I don't modify the checked out source tree that is the current working directory). Unfortunately, this fails with the following error:

Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 202, in <module>
    main(settings)
  File "/action/publish_unit_test_results.py", line 52, in main
    files = [str(file) for file in pathlib.Path().glob(settings.files_glob)]
  File "/action/publish_unit_test_results.py", line 52, in <listcomp>
    files = [str(file) for file in pathlib.Path().glob(settings.files_glob)]
  File "/usr/local/lib/python3.6/pathlib.py", line 1098, in glob
    raise NotImplementedError("Non-relative patterns are unsupported")
NotImplementedError: Non-relative patterns are unsupported

Here is my configuration:

      - name: Publish Unit Test Results
        uses: EnricoMi/[email protected]
        # run even if tests failed
        if: always()
        with:
          files: /tmp/test-reports/**/*.xml

Status message on Github Action Summary Page

Hey,
On the summary page of failed github workflows, in the "Annotations" Section, the failed tests are displayed as warnings for me.
Furthermore, they always recommend to check line 0 in the failed file. (See image below)
[Screenshot from 2020-10-23 14:50:18]

Do you know what the reason for this is?
Best regards
Anton

Test results are sometimes published to the wrong job

We have a few workflows, and one of them tests code and publishes test results. Sometimes, however, this action publishes test results to the wrong job (see the screenshot - test results were published to a different workflow, the one that was building documentation).


Use special GitHub Action status log format

Action output formatted in a special way is picked up by GitHub and shown as annotations on the workflow page. Use this for warnings and errors of the publish action.

Make PR comments more customizable

I really like the feature of having the test results in PR comments. However, I think these comments can get out of hand on very active PRs.

I would really like to see an option to make the bot comment only once and edit that comment for every new test run, like the codecov bot does. This way, active PRs wouldn't fill up with bot comments.

PR comments should probably also be optional, and one should be able to turn them off in the GitHub workflow. I couldn't find anything like this in the readme.

Make action comment on Pull Requests once and edit

In #28 (comment) we discussed that the action should allow configuring it to comment on PRs only once and later edit that comment with the latest test results. Then the test results stay at the top of the PR (always at the same position) and the PR does not fill up with more and more test result comments.

Provide link to failing test and line in annotations

Question #32 made me think of linking to the failing test and line from the annotations. Given the commit sha, test file and line, the action could generate a link like this and add it to the respective annotation:

https://github.com/EnricoMi/publish-unit-test-result-action/blob/de7f7f0c5f7694846ce69e3384f2e5a03253c141/test/test_publish.py#L35

Such a link gets nicely rendered by GitHub:

self.assertEqual(get_formatted_digits(None), (3, 0))

Providing a few lines before and after gives a nice context:

def test_get_formatted_digits(self):
self.assertEqual(get_formatted_digits(None), (3, 0))
self.assertEqual(get_formatted_digits(None, 1), (3, 0))
self.assertEqual(get_formatted_digits(None, 123), (3, 0))

Let's hope that also holds for annotations. It probably won't, because annotations do not support markdown.

Resolving the test file path from the test result file will be challenging. This might work best with some configuration, such as how many directories to strip from the path or which path to prepend.
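Such a configuration could be as simple as a strip-count plus a prefix. A hypothetical sketch of the mapping (both parameter names are invented for illustration):

```python
def map_test_path(path: str, strip_dirs: int = 0, prepend: str = '') -> str:
    """Map a path from a test result file to a repository path.

    strip_dirs: number of leading directories to remove (e.g. a build dir).
    prepend: repository-relative prefix to add back in front.
    """
    parts = path.split('/')[strip_dirs:]
    return '/'.join(([prepend] if prepend else []) + parts)
```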

Add annotations to changed files

Hey, I really like your library!
Is there a plan currently to implement adding annotations so that they can be seen in the "files changed" tab?
Best regards
Anton

Make cross for failed tests red

For me (Windows 10, Firefox and Chrome) the cross ✖️ for failed tests is grey, not red like in the screenshots.
I feel like failed tests get lost this way and should rather be bright red. You could use this Unicode character instead: ❌

Make action reduce number of comments on PR

In #28 (comment) we discussed that the action could reduce the number of comments it creates by reusing its latest comment until other comments are added by other users. This way, the unit test results stay at the end of the PR conversation, but not every commit causes a new comment. This is an alternative behaviour between commenting only once (#38) and always commenting (the current behaviour). It could be combined with hiding older comments (#36).

Parsing fails if test case result is empty

Parsing fails when the test case result is missing attributes or content

A Java JUnit 5 test generates the following XML test case, which fails parsing and breaks publishing.

<testcase name="test" classname="test.Test" time="0.042">
  <skipped/>
</testcase>

fails with the message

TypeError: argument of type 'NoneType' is not iterable

The culprit seems to be missing null checks here:

message=unescape(case.result.message) if case.result else None,
content=unescape(case.result._elem.text) if case.result else None,
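A null-safe version of those two lines might look like this (a sketch of the fix: the result element, its message, and its text may all be absent, as with a bare `<skipped/>` element):

```python
from xml.sax.saxutils import unescape

def safe_result_fields(case):
    """Extract a test case's failure message and body without
    assuming either the result element or its fields exist."""
    result = getattr(case, 'result', None)
    message = (unescape(result.message)
               if result is not None and result.message is not None else None)
    content = (unescape(result._elem.text)
               if result is not None and result._elem.text is not None else None)
    return message, content
```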

Fails if run by schedule

https://github.com/ned14/llfio/runs/1477349816?check_suite_focus=true

Run EnricoMi/[email protected]
  with:
    check_name: Unit Test Results
    github_token: ***
    files: **/merged_junit_results.xml
    log_level: INFO
/usr/bin/docker run --name ac1af7e4c4db2a986807a5c45326c_1a9922 --label 179394 --workdir /github/workspace --rm -e INPUT_CHECK_NAME -e INPUT_GITHUB_TOKEN -e INPUT_FILES -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_LOG_LEVEL -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/llfio/llfio":"/github/workspace" 179394:727ac1af7e4c4db2a986807a5c45326c
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 793, in <module>
    commit = get_var('COMMIT') or get_commit_sha(event, event_name)
  File "/action/publish_unit_test_results.py", line 765, in get_commit_sha
    raise RuntimeError("event '{}' is not supported".format(event))
RuntimeError: event '{'schedule': '0 0 1 * *'}' is not supported

Unexpected input(s) 'commit'

I'm running v1.6 and provided a git sha as shown in the README.

However, I get the following warning:

Warning: Unexpected input(s) 'commit', valid inputs are ['entryPoint', 'args', 'github_token', 'check_name', 'files', 'report_individual_runs', 'deduplicate_classes_by_file_name', 'hide_comments', 'comment_on_pr', 'log_level']
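For context, the warning appears because the pinned version predates the input: a step like the following triggers it (the `commit` expression is illustrative, not a confirmed README excerpt):

```yaml
- uses: EnricoMi/publish-unit-test-result-action@v1.6
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    files: "**/test-results.xml"
    commit: ${{ github.sha }}   # not a valid input in v1.6
```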

Support scheduled action event

When trying to publish results from a scheduled execution, we get an error like:

Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 845, in <module>
    commit = get_var('COMMIT') or get_commit_sha(event, event_name)
  File "/action/publish_unit_test_results.py", line 812, in get_commit_sha
    raise RuntimeError("event '{}' is not supported".format(event))
RuntimeError: event '{'schedule': '0 * * * *'}' is not supported

It should be easy to support, just by adding a new entry to:

def get_commit_sha(event: dict, event_name: str):
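A minimal sketch of that change follows. Falling back to the GITHUB_SHA environment variable is an assumption on my part, since scheduled event payloads carry no commit; the non-schedule branches are simplified for illustration:

```python
import os

def get_commit_sha(event: dict, event_name: str):
    # Scheduled runs have no commit in the event payload,
    # so fall back to the GITHUB_SHA environment variable.
    if event_name == 'pull_request':
        return event['pull_request']['head']['sha']
    if event_name in ('push', 'workflow_dispatch', 'schedule'):
        return os.environ.get('GITHUB_SHA')
    raise RuntimeError("event '{}' is not supported".format(event))
```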

Comment hard codes "Unit Test Results" instead of $check_name

See

pull.create_issue_comment('## Unit Test Results\n{}'.format(get_long_summary_md(stats_with_delta)))

I was expecting to find the defined check_name used here.

The result is that if you have more than one instance of this action in use (e.g. for "Integration tests" and "Unit tests"), both comments get the same heading and you can't easily tell which is which.

I'm happy to provide a PR for this if it is accepted.
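The fix could interpolate the configured name into the comment heading. A sketch, with a hypothetical helper; `check_name` would come from the action's input:

```python
def pr_comment_body(check_name: str, summary_md: str) -> str:
    # Use the configured check name so two instances of the action
    # (e.g. 'Unit Test Results' and 'Integration Test Results')
    # produce distinguishable PR comments.
    return '## {}\n{}'.format(check_name, summary_md)
```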

Disable comparisons to earlier test runs

Hey, thanks for writing this lovely action!

I'm trying to integrate it into our work project and I'm running into a specific issue around comparisons with earlier commits: the action does a good job doing so, but I'd love a way to disable those comparisons.

The reason we need to do that is that we have some smarts built into our test runner which only runs a subset of tests based on which parts of the codebase were changed. So the number of tests run varies wildly from commit to commit.

Alternatively, we could potentially provide the baseline in some format (e.g. in the JUnit format as skipped tests). If that works, it should be possible to provide better data as well -- e.g. if a test gets added and another gets removed, you'd be able to say "+1, -1".

What do you think?
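One possible shape for such an option (purely hypothetical; this input does not exist today and the name is illustrative):

```yaml
- uses: EnricoMi/publish-unit-test-result-action@v1
  with:
    files: "**/results.xml"
    compare_to_earlier_commit: false   # hypothetical: skip the delta section
```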
