Consider adopting the Open Test Reporting format #13045
Comments
Sounds reasonable to support. Not sure if we'd rush into it, though. One way could be starting this as a plugin, with future adoption into core if it proves useful. I remember facing xunit2 limitations (#7537), so having something new that could potentially address such things would be useful.

@marcphilipp I haven't looked into the spec, but have you considered custom test result statuses? Pytest has a few that I haven't seen anywhere else (xfail/xpass, for example), and some plugins extend that with more custom status names.
That sounds like a good strategy! 👍
Currently, there's a predefined list of result statuses: https://github.com/ota4j-team/open-test-reporting/blob/66b1f088b599eaecb982eb9a3ccaa15a29c1e3ec/schema/src/main/resources/org/opentest4j/reporting/schema/core-0.2.0.xsd#L93-L100 Can your statuses all be mapped to those? Additionally, you could define a custom element in your own namespace, for example:

    <result status="SUCCESSFUL">
        <pytest:status>xpass</pytest:status>
    </result>
Well, those statuses exist because their semantics are different and they should be represented separately. xpass means that a test was expected to fail because of some unfixed bug in the tested code, but it didn't fail, which is unexpected. In strict mode, it evaluates not to a success but to a failure, because the test suddenly started passing. xfail is the opposite: we expect a test against broken code to fail, and it does, so it doesn't fail the entire test session but shows that the expectations match.
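To illustrate with a minimal pytest sketch (parse_config is a hypothetical function used only for this example):

    import pytest

    def parse_config(text):
        # Hypothetical function with a known, unfixed bug:
        # it should return {} for empty input but raises instead.
        raise ValueError("cannot parse empty input")

    # Expected to fail until the bug is fixed. If the test fails, pytest
    # reports "xfail" and the session stays green. If it unexpectedly
    # passes, pytest reports "xpass"; with strict=True, that xpass fails
    # the test session.
    @pytest.mark.xfail(reason="known bug: empty input is rejected", strict=True)
    def test_parse_config_handles_empty_input():
        assert parse_config("") == {}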
A few explanations around test statuses: those can all be mapped to success/skipped/aborted/failure, but it's a somewhat lossy operation, since they are all semantically somewhat different from the statuses in that list. I believe the topic has come up a few times in the past, with the desire to have the semantics show up properly in "export" reporting formats as well.
In JUnit, that would be reported as a failure of the "container" of the test function (in our case, usually a test class), but it sounds to me like …
Thanks for your explanations! I'm up for trying to model these more precisely. I don't want to make it completely generic, because I'd like tools to be able to interpret the statuses. Maybe by adding two additional ones like this?
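A hypothetical sketch of what two additional values in the core schema's status enumeration could look like (the value names below are illustrative, not the actual proposal from this thread):

    <xs:enumeration value="EXPECTED_FAILURE"/>
    <xs:enumeration value="UNEXPECTED_SUCCESS"/>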
Would those be reported as a separate test run or within the same run?
Rerunfailures stays within the same test run, but it would trigger multiple reports.
@webknjaz I am surprised that you've never seen these extremely common statuses before. :)

Automake supports them too: https://www.gnu.org/software/automake/manual/html_node/Generalities-about-Testing.html

TAP supports the underlying concept and calls it "TODO tests": https://testanything.org/tap-version-14-specification.html#todo-tests

Meson (https://mesonbuild.com/Unit-tests.html) implements xpass and xfail as something we "copied from automake", including support for the automake testsuite harness handling of tests that exit 77 to indicate a SKIP, and exit 99 to indicate an ERROR.

Anyway, I agree these are extremely useful statuses that probably every testing framework should want to implement.
They are, but since (as @webknjaz notes) there's the added wrinkle that those semantics can change at runtime (for example, based on whether or not strict mode is enabled), for a common reporting format they probably should be mapped, and the mapping should change accordingly. IOW, if strict mode transforms a passing xfail-marked test into a failure, the reported status should be a failure.

(Edit: The other option would be for the reporting to have a separate severity field to complement the test result field. IOW, an "xfail" test could report its mapped status plus a severity that preserves the original semantics.)
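A hypothetical sketch of that severity-field idea (the severity attribute is not part of the current schema; the attribute and value names are illustrative):

    <!-- xfail: the test failed, but that was expected -->
    <result status="FAILED" severity="expected"/>

    <!-- xpass: the test passed, but a failure was expected -->
    <result status="SUCCESSFUL" severity="unexpected"/>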
TL;DR The JUnit team has defined a new language-agnostic test reporting format and implemented a CLI tool for validation, conversion, and HTML report generation. We're reaching out to well-known testing frameworks and reporting tools to ask for feedback and, ultimately, adoption, if you think this format provides value to your users.
Motivation and Context
You've probably come across the "JUnit XML" format for test reporting. This format did not originate from the JUnit project but was initially introduced by the Ant build tool and then adopted by other Java build tools like Maven and Gradle. Many build servers know how to parse the XML-based format, and even non-Java tools often support it. However, it’s based on the concept of test classes and methods, so using it for frameworks and tools where those elements are not present is awkward at best. Moreover, it does not support nested structures beyond a simple parent-child relationship. Finally, it is not extensible: no additional attributes can be added without the risk of breaking existing tools.
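For reference, a minimal report in that format typically looks something like this (names and values are illustrative):

    <testsuite name="com.example.CalculatorTest" tests="2" failures="1" errors="0" time="0.031">
        <testcase classname="com.example.CalculatorTest" name="testAdd" time="0.003"/>
        <testcase classname="com.example.CalculatorTest" name="testDivide" time="0.002">
            <failure message="expected 2 but was 3"/>
        </testcase>
    </testsuite>

Everything hangs off class and method names, and nesting stops at the testsuite/testcase level.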
For those reasons, many testing frameworks (for example, TestNG and Spock in the Java ecosystem) have defined their own reporting formats. This has given them the flexibility they need, but the number of tools that can parse, display, or transform their custom formats is very limited.
To overcome these limitations, the JUnit team is defining a new format for test reporting. Its goal is to be platform-agnostic so that as many testing frameworks as possible can benefit from it. Moreover, it is designed to be extensible so new data can be added as needed, without breaking consumers. However, all well-known attributes are properly defined so it remains consumable by downstream reporting tools.
Of course, it will take a while for downstream tools to support the new format. However, as more testing frameworks adopt it, downstream tools become more likely to follow.
Overview
The new format is based on XML because it provides more expressive ways to define schemas. Moreover, XML has typed extensions built-in via the use of multiple schemas. If a testing framework provides a listener mechanism, it should be possible to write an Open Test Reporting XML file from an extension.
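A rough sketch of what an event-based Open Test Reporting file can look like (namespace URIs, version numbers, and timestamps below are illustrative, not normative; see the repository's schemas for the actual definitions):

    <?xml version="1.0" encoding="UTF-8"?>
    <e:events xmlns:e="https://schemas.opentest4j.org/reporting/events/0.2.0"
              xmlns:core="https://schemas.opentest4j.org/reporting/core/0.2.0">
        <e:started id="1" name="test_addition" time="2024-01-01T12:00:00Z"/>
        <e:finished id="1" time="2024-01-01T12:00:01Z">
            <core:result status="SUCCESSFUL"/>
        </e:finished>
    </e:events>

Nested structures are expressed by referencing the parent event's id, and framework-specific data can live in additional namespaces alongside the core schema.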
Benefits
- Platform- and language-agnostic, so any testing framework can adopt it
- Extensible: new, typed data can be added via additional schemas without breaking consumers
- Well-known attributes are properly defined, so reports stay consumable by downstream reporting tools
- Supports nested structures beyond a simple parent-child relationship
- Accompanied by a CLI tool for validation, conversion, and HTML report generation
Next Steps
The JUnit team would be happy to get your feedback on this initiative. We can discuss here or you're welcome to start a thread or open an issue in the Open Test Reporting repo. Should you consider adopting the new format, we'd be happy to provide guidance but we won't have the resources to actually contribute an implementation.
This is a bit of an unusual request so please forgive me for not sticking to the issue template.