Is your feature request related to a problem? Please describe.
There are situations in which it is useful to automatically add or update pieces of the "expectations" section, based on test results from a previous test run.
Examples of these situations:
Resolving the impact of adding new result tags to a template
Development of a liquid test suite has been on the shelf for some time while the liquid code was updated in the meantime; many tests now fail even though the code contains no error
Creating liquid tests for templates that have "proven" quality and did not have recent updates
Describe the solution you'd like
The solution would be to have a specific CLI command that picks up a certain test output and plugs the results and rollforwards into the YAML in the correct format, overwriting previous versions of the same key.
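To make the idea concrete, here is a minimal sketch of the merge step such a CLI command could perform. All names here (`merge_expectations`, the shape of the test-run data) are illustrative assumptions, not the actual Silverfin CLI API; the liquid test file is YAML on disk, modeled below as plain dicts to keep the example self-contained.

```python
# Hypothetical sketch: merge observed test-run output into the parsed
# "expectations" of a liquid test file, overwriting previous versions of
# the same key. Dict structure is assumed, not the real schema.

def merge_expectations(test_yaml: dict, run_output: dict) -> dict:
    """Overwrite the 'results' and 'rollforward' expectations of each
    unit test with the values observed in a previous test run."""
    merged = {name: dict(body) for name, body in test_yaml.items()}
    for test_name, observed in run_output.items():
        expectation = merged.setdefault(test_name, {}).setdefault("expectation", {})
        for section in ("results", "rollforward"):
            if section in observed:
                # Existing keys are overwritten; new keys are added.
                expectation.setdefault(section, {}).update(observed[section])
    return merged

# Example: a test whose 'results' expectation is outdated and is missing
# a newly added result tag.
existing = {
    "test_depreciation": {
        "context": {"period": "2023-12"},
        "expectation": {"results": {"total": 100}},
    }
}
run = {"test_depreciation": {"results": {"total": 120, "new_tag": "ok"}}}

updated = merge_expectations(existing, run)
print(updated["test_depreciation"]["expectation"]["results"])
# {'total': 120, 'new_tag': 'ok'}
```

In this sketch the merge only ever widens or overwrites expectations from observed output, which is exactly why the confirmation step discussed below matters.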
Describe alternatives you've considered
A workable alternative is doing this on a per test basis (just as with liquid tests themselves, just run it for a single test result on a single test).
Additional context
There is a moral hazard involved: a script like this can be used to make a test suite pass while it in fact contains liquid bugs. For instance, it should therefore not be used right after the liquid code has been updated. This should be carefully considered if it is decided to (partially) implement this feature.
As we briefly discussed before, I think we may consider looking into something like this, but it should always be requested by the user and never applied by default. Some kind of confirmation would be needed to avoid the potential issue you are mentioning (abusing it to make the tests pass).
I thought about this before, but I was considering adding it to the VS Code extension rather than the CLI. The extension has something called "quick fixes", where it can propose to solve the errors present in your file. So, when it detects an error related to a missing row, the extension could propose adding the row with the key/value pair provided by the test run.
How much development work would the VS Code extension require from our side (approximately)? I mean, is it just setting some parameters of an already existing feature, or do we really need to build this?
And what would be the most user-friendly way of "selecting" test results for a certain test run?
We have already implemented quick fixes for one case: when the expectation set doesn't match the one obtained from running the test.
So we would need to cover this different scenario (creating a new row for missing expectations)
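As an illustration of that scenario, the fix could append the missing row to the expectations. The fragment below only follows the general shape of liquid test YAML; the field names and values are assumptions, not an exact schema.

```yaml
# Before the quick fix, 'total_assets' is missing from the expectations.
expectation:
  results:
    total: 120
    total_assets: 1500  # row added by the quick fix, value taken from the test run
```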
Woetfin changed the title from "Liquid testing CLI - Automatically add or update test output into YAML" to "Liquid testing: automatically add or update test output into YAML" on Aug 14, 2023