Comments (33)
Hi @jawnsy
This is sadly a GitHub problem, which you can read more about here:
https://github.community/t/github-actions-status-checks-created-on-incorrect-check-suite-id/16685
Hopefully they provide a fix for that soon.
from action-junit-report.
@koyama-yoshihito as of this discussion here, the issue is still not fixed by GitHub: community/community#14891
from action-junit-report.
@laughedelic without a configuration where multiple imports can be done at once, I don't think the summary API currently exposed by GitHub's core library offers that possibility yet.
But judging from a sample summary like this one: https://github.com/mikepenz/action-junit-report/actions/runs/2717236074#summary-7464569272 it seems it's not too bad.
What is also possible today is to enable the annotate_only
mode, which no longer creates a separate check run, meaning only the summary will be created.
from action-junit-report.
As a workaround you could use the same approach as other actions (e.g. https://github.com/EnricoMi/publish-unit-test-result-action) and add a link to the check run in the summary.
It doesn't solve the issue, but at least the "details" are then only one click away...
from action-junit-report.
Hi @AkhremDmitry, as far as I am aware, no new API has been introduced to fix the root problem of this ticket.
However one thing I use in different projects these days is to only have annotations and the job summary:
annotate_only: true
detailed_summary: true
And I let the main build that runs the tests fail, so it does not cause such a wrong association.
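For reference, these flags go in the step's with: block; a minimal sketch (the version tag and report path are illustrative, not prescribed by this thread):

```yaml
- name: Publish Test Report
  uses: mikepenz/action-junit-report@v4   # version tag illustrative
  if: always()                            # run even when the test step failed
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'  # example path
    annotate_only: true       # only annotate, don't create a separate check run
    detailed_summary: true    # publish a detailed job summary
```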
from action-junit-report.
Hi there - unfortunately, this issue makes this Action pretty unusable for us, since the check run output is the only place our test suite failures are visible, and when they get added to some other "random" Workflow, they're impossible to find.
I waded through some of the related issues and there seems to be a potential workaround:
The JUnit reporter GitHub action should update the existing check instead of creating a new one:
1. Get the check which triggered the action: https://docs.github.com/en/rest/reference/checks#get-a-check-run
2. Update this check with the provided information about test results: https://docs.github.com/en/rest/reference/checks#update-a-check-run
If I understand correctly, this would add all of your annotations to the actual Job where I'm using the step (our tests), instead of as a new check that is getting misplaced in some other Workflow.
Besides being a workaround for this issue, this is actually exactly where I was trying to find these annotations to begin with. I think it's a much more useful place to put them. Even if the check was showing up under the correct Workflow, I think users would still be looking for failures in the existing, failed "Test Job" check and not some separate "JUnit Reporter" check (I know we could configure the name to be more obvious, but still).
Am I missing anything? Would you consider implementing such functionality?
from action-junit-report.
@pbrisbin thank you very much for the added notes.
It sure sounds like an opportunity to use those APIs and make it so it will only update the current run. Would need to look into it.
Do you maybe know an action already using this approach?
from action-junit-report.
Hmm, I can't say that I do. A lot of Actions add annotations to the Job's check where they're run, but they do it by outputting messages with a problem-matcher attached, not by updating the Check. I suppose that's another alternative you could consider.
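For context, the problem-matcher alternative works by registering a small JSON matcher and then printing lines that match it; a rough sketch (the owner name and regexp here are made up, see the actions/toolkit problem-matcher docs for the real format):

```json
{
  "problemMatcher": [
    {
      "owner": "junit-failures",
      "pattern": [
        {
          "regexp": "^(.+?):(\\d+):\\s+(.*)$",
          "file": 1,
          "line": 2,
          "message": 3
        }
      ]
    }
  ]
}
```

A step would register it with `echo "::add-matcher::.github/junit-matcher.json"` and then print failures in the matching shape, e.g. `path/to/spec.rb:42: expected true`.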
Looking at the docs a bit more closely, I think it would look something like,
const ref = github.context.sha // the commit this workflow run is for
const check_name = github.context.job // assumes check names are "<job-id>"
const filter = 'latest'
const checks = await octokit.rest.checks.listForRef({
  ...github.context.repo,
  ref,
  check_name,
  filter
});
// assumes a check for our job exists if we're running, probably want some error-handling
const check_run_id = checks.data.check_runs[0].id
// as per docs: "Each time you update the check run, annotations are
// appended to the list of annotations that already exist for the check run."
await octokit.rest.checks.update({ ...github.context.repo, check_run_id /* , output, ... */ })
Oh, and I noticed you're currently only sending the first 50 annotations. I suppose it's reasonable to expect a test suite to never have more than 50 failures in a given run, but you could send everything by calling this .update() multiple times, each with fewer than 50.
The Checks API limits the number of annotations to a maximum of 50 per API request. To create more than 50 annotations, you have to make multiple requests to the Update a check run endpoint. Each time you update the check run, annotations are appended to the list of annotations that already exist for the check run
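Since annotations accumulate across update calls, the batching could be sketched like this (the chunk helper and sendAllAnnotations are hypothetical names, not part of the action):

```javascript
// Split a list of annotations into batches of at most `size`
// (50 is the Checks API per-request limit).
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Sketch: send every batch via repeated update calls; per the docs,
// annotations are appended to those already on the check run.
async function sendAllAnnotations(octokit, owner, repo, check_run_id, title, summary, annotations) {
  for (const batch of chunk(annotations, 50)) {
    await octokit.rest.checks.update({
      owner, repo, check_run_id,
      output: { title, summary, annotations: batch }
    });
  }
}
```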
from action-junit-report.
@pbrisbin there's a version coming up using the outlined APIs. Sadly they lead to a different result than anticipated, and because the check is still running at the time of updating, it won't keep the failure state, for example.
Just for reference, there is still no API to create a check in a specific suite / workflow sadly:
https://github.community/t/specify-check-suite-when-creating-a-checkrun/118380/9
https://github.community/t/associate-a-check-run-with-a-workflow/122540
https://github.community/t/associating-a-check-run-with-a-specific-action-run/126682
from action-junit-report.
Awesome!
due to the check still being running at the time of updating the result, it won't keep the failure state for example
Let me make sure I understand...
So, if I use this in an app-test job for example, it runs tests then does the JUnit thing, and the Job will pass/fail based on the tests and not the JUnit processing? That seems... desired? I kind of think the only reason the JUnit action needs to control the failure at all is because it's doing that separate-check thing. When it's part of the test's check, the tests setting the check result just makes sense (to me).
Am I missing something?
from action-junit-report.
@pbrisbin you are actually right, that particular thing may not be an issue in this case.
One thing seems to be a problem though: the way the annotations are rendered if we use the update approach, vs. the current rendering, e.g.
https://github.com/mikepenz/action-junit-report/runs/4279043601
from action-junit-report.
Wow, well that's super annoying. I wonder why that is 🤔
from action-junit-report.
I might be grasping at straws, but in your PR, it looks like the Update sets { output.conclusion } while the Create sets { conclusion, output.* }. Is it possible that matters?
from action-junit-report.
@pbrisbin sadly this wasn't it, but it makes the build fail while it's still ongoing, breaking a few things :D
https://github.com/mikepenz/action-junit-report/actions/runs/1492172809
After it's done, it will change its state to successful in this particular build.
from action-junit-report.
Ha, not unexpected in hindsight. I would say that on the update path you don't want conclusion at all -- you aren't intending to conclude it, right?
from action-junit-report.
I wonder if the test's conclusion/output is somehow taking over as the "important one". So your annotations remain, but they get subdued in the UI once the test completes?
Either way, totally annoying. And sorry if this ends up a dead-end.
from action-junit-report.
@pbrisbin I somehow feel this is potentially an issue with the GitHub API. The documentation itself does not indicate any behavior difference, as far as I could find in a brief look: https://docs.github.com/en/rest/reference/checks#update-a-check-run
Thank you for also looking into it. Really appreciated!
from action-junit-report.
Yeah, I agree. Looking at the API docs, the POST bodies for Create and Update are identical, so you would expect the results to be the same too.
from action-junit-report.
There seems to be more functionality that doesn't work for the update vs create:
I don't see the annotations appear in the diff. Unfortunately, I think not seeing the annotations in-line is a bridge too far for us to use this style.
I also notice here that the Process completed with exit code 1. annotation actually includes the Job/Check name (tts-test), but the others do not -- as if they're attached to the Workflow but not any one Job/Check. I wonder if that's somehow related.
from action-junit-report.
@pbrisbin that's unfortunate :/
I haven't had the chance to check whether there have been other reports of the update endpoint behaving differently, or whether anything has been mentioned on the forums.
from action-junit-report.
Oh, that may actually be our fault. I need to investigate where file/line information is meant to live in a JUnit XML. Our formatter seems to just have it as part of the failure message, but that's probably not right.
Notice the "Line 1 in {spec description}", even though {file}:{line} is there as the first part of the message.
from action-junit-report.
@pbrisbin sadly it seems line numbers are not consistent between different output formats.
Funnily enough, there was a feature request today to add support for handling the line attribute in the report.xml.
There surely is some way to expand support for handling it. It should already handle retrieving line numbers from file paths, as shown in your screenshot:
https://github.com/mikepenz/action-junit-report/blob/main/src/testParser.ts#L58-L60
Maybe if you have a sample output, we can use it to expand the test cases and see why it does not work in your case.
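As an aside, pulling a line number out of a `{file}:{line}` fragment in a failure message could look roughly like this (the regex and function name are ours, not the action's actual parser):

```javascript
// Find the first "path.ext:line" occurrence in a failure message or
// stacktrace and return the line number, or null if none is found.
function lineFromMessage(message) {
  const match = message.match(/([\w.\/\\-]+\.\w+):(\d+)/);
  return match ? parseInt(match[2], 10) : null;
}
```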
from action-junit-report.
Oh, that's convenient!
Here is an example of what it is right now: golden.xml
But we author this formatter, so I can pretty easily add attributes.{file,line}, which it looks like would get picked up before any inferring is done.
from action-junit-report.
I didn't realize. In that case I'd propose adding the line attribute :) for direct support.
And thanks for providing the xml. I see, so we can't parse it from the description. If it was part of the file attribute, it would already be parsed.
from action-junit-report.
So you are saying that if we added file="{file}:{line}" that would also work?
line numbers are not consistent between different output formats
And there isn't any one way to do that is more common or standardized than any other?
from action-junit-report.
I believe using the line attribute is the most stable form, as shown in this xunit export of the JUnit format: https://github.com/mikepenz/action-junit-report/blob/main/test_results/xunit/report.xml
In cases where neither is available (the line via the file attribute, or via the line attribute), the action tries to do some more searching to find the line number in the stacktrace: https://github.com/mikepenz/action-junit-report/blob/main/src/testParser.ts#L53-L55
I'll check again and see why it wouldn't match the description in your case, as it should be the stacktrace.
I did some searching, but the XSD for JUnit output, for example, does not specify line, so it seems this is generally an extension to the spec.
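For illustration, a testcase carrying the non-standard line attribute, in the style of the xunit export linked above (names and paths here are made up):

```xml
<testsuite name="sample" tests="1" failures="1">
  <testcase name="adds numbers" classname="CalculatorSpec"
            file="spec/calculator_spec.rb" line="12">
    <failure message="expected 4, got 5">spec/calculator_spec.rb:12</failure>
  </testcase>
</testsuite>
```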
from action-junit-report.
@mikepenz
Hi!
Thank you for developing this nice plugin.
I am looking for a way to solve this issue.
Is there any progress or workaround?
from action-junit-report.
@mikepenz
We just keep watching this discussion.
Thanks for info.
from action-junit-report.
Could this feature help with the issue: https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries?
from action-junit-report.
@laughedelic thank you for the link. The newest version of this action does publish a job summary.
While it seems to be a partial solution, I don't think it fully covers the whole problem.
Sample display:
https://github.com/mikepenz/action-junit-report/actions/runs/2676469277
Sadly this would lose the great feature of seeing the test results within the reported checks:
We could add a flag to disable sending the report and only keep the summary - for people who'd prefer that.
from action-junit-report.
Glad to know that it's already in use! 👏
I think the summary in the checks list is a very nice feature, but unfortunately, when a lot of tests are run in parallel, the number of check summaries is doubled, so it would be good to either be able to disable it or to consolidate all summaries by some tag (as suggested in #195).
from action-junit-report.
Btw. regarding multiple imports as one. Please see my comment here: #195 (comment) with the current (in-progress) PR
from action-junit-report.
Hi @mikepenz
Thanks a lot for this action!
It works fine when the workflow is triggered by a merge or a PR.
But I've run into a problem that looks related to this issue.
I have a workflow triggered on a schedule, and in that case all JUnit reports are created in another workflow run.
Can I somehow customize the runId where the check should be created?
from action-junit-report.