doc: add restrictions around node:test usage #56027
base: main
Conversation
These dependencies are:

- `node:async_hooks`
To be honest, `async_hooks` is probably the only thing I would include here (and I may update the test runner to migrate off of that in the future).
Sounds good to me, but before removing the rest, what's your reasoning for keeping only `async_hooks` here?
`async_hooks` actually changes how things work. `child_process` and `fs` are so heavily depended on by other things that if they stop working we will definitely notice, and the test runner doesn't do anything "fancy" with them. You can also use the test runner without spawning child processes: child processes are only used by the test runner CLI, which Node core doesn't use at all anyway.

The only place the test runner uses a stream is for emitting events. If you were going to include that, you may as well include the event emitter as well, since it is part of streams.
The `vm` module is only used (directly) for evaluating snapshot files.
Also worth noting that the test runner is already used to test the test runner itself 😄
Can you recommend changes to the text, please?
- `node:child_process`
- `node:fs`
- `ReadableStream` in `node:streams`
- `node:vm`
I think the files listed in `test/parallel/test-bootstrap-modules.js` can be a good measure here.
What do you mean? I don't follow
It's probably worth adding anything related to the bootstrapping process to the list of things not to test with the test runner since I'm not sure you can be 100% certain the test runner itself is bootstrapped properly at that point.
(Most of) the files listed there are essential parts of Node.js functionality that are used more ubiquitously, and hence are more likely to be depended on by `node:test` itself (e.g. `node:async_hooks` is actually built on top of other modules, not just itself). `test/parallel/test-bootstrap-modules.js` lists a set of files that are generally used everywhere.
Can you recommend changes to the text, please?
Some issues I've found with
Co-authored-by: Rafael Gonzaga <[email protected]>
@@ -141,6 +141,26 @@ request. Interesting things to notice:

## General recommendations

### Usage of `node:test`

It is optional to use `node:test` in tests outside of testing the `node:test`
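For readers of this thread, here is a minimal sketch of the two styles the proposed guideline distinguishes (illustrative only, not the wording under review; the module under test and the assertions are arbitrary):

```js
'use strict';

const assert = require('node:assert');
const { basename } = require('node:path');

// Traditional core style: bare assertions, no test runner involved.
// If the module under test is broken, the file simply throws and the
// harness reports the failure.
assert.strictEqual(basename('/foo/bar.txt'), 'bar.txt');

// The same check written with node:test: the runner (and whatever it
// depends on internally) now sits between the test and the code being
// exercised, which is why the guideline asks for care when the code
// under test is one of the runner's own dependencies.
const test = require('node:test');

test('path.basename strips the directory', () => {
  assert.strictEqual(basename('/foo/bar.txt'), 'bar.txt');
});
```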
It seems important to document things like #52177, or otherwise we would see more flakes coming up once people start to spawn hundreds of child processes in parallel and overload the machine using `spawnPromisified` + `node:test`.
Agreed. Using the concurrency option is fine though, unless you are specifically planning to spawn child processes. But that applies to things like `Promise.all()` as well.
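To make the concern concrete, here is a rough sketch (not part of the PR text) of capping subtest concurrency in a file whose subtests each spawn a child process, so that dozens of children are not forked at once; the `runChild` helper is made up for the example and merely stands in for something like `spawnPromisified`:

```js
'use strict';

const test = require('node:test');
const assert = require('node:assert');
const { execFile } = require('node:child_process');
const { promisify } = require('node:util');

const execFileAsync = promisify(execFile);

// Hypothetical helper: run a small snippet in a fresh Node.js process.
function runChild(code) {
  return execFileAsync(process.execPath, ['-e', code]);
}

// `concurrency: 2` keeps at most two subtests (and therefore at most two
// child processes) in flight at a time, even though every subtest is
// created up front via Promise.all().
test('spawning fixtures', { concurrency: 2 }, async (t) => {
  const cases = [1, 2, 3, 4];
  await Promise.all(cases.map((n) =>
    t.test(`echoes ${n}`, async () => {
      const { stdout } = await runChild(`console.log(${n})`);
      assert.strictEqual(stdout.trim(), String(n));
    }),
  ));
});
```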
Can you recommend changes to the text, please?
I think people will have different opinions on this. When there are a number of tests in a single file and I am relying on the CI for some platform other than macOS I actually want to see everything that passes and fails before pushing up another commit. Another general solution is to not test multiple things in a single file.
The failures should be at the very bottom?
This drives me crazy as well. I'm not sure if this is something specific to Jenkins that can be fixed or what. I haven't noticed it in GitHub Actions, for example.
From a quick check, it does appear that Deno and Bun both use their own test runners in at least some places. Of course, I didn't check every test and I don't know what policies they might have in place around that.
I certainly expect using a test runner to add overhead that isn't there when not using a test runner. A few things to note: this sounds like web frameworks claiming to be x% faster at serving an empty response, but this often goes away once any real logic is introduced. Also, the test runner bootstraps itself when the first test is run, so I would expect subsequent tests in the same file to pay less of that cost.
It is indeed opinionated. I prefer to stop at the first failure, fix it, rerun, repeat.
Definitely.
I honestly don't care how other projects run their tests and I think there is nothing wrong with our tests. I actually think the way our tests are currently written and run is one of the best parts of the project. No bullshit, only the strictly needed code and dependencies. I am very convinced that a refactor to use `node:test` would be harmful. Instead of harmful refactors, I think that our time is better spent on investigating and fixing the dozens of tests marked flaky and issues like #54918.
@lpinca I've mentioned a couple of upsides of using node:test during the last TSC meeting. I recommend watching it, since it also includes several different opinions from other TSC members as well.
There is a difference between running it on CI and locally. For example, we use different output settings in the Python test runner locally and in the CI as well. I think it would make more sense to align with what we do with the Python test runner: low-noise output when run locally, more details in the CI.
The logs are not, and they are in the middle of a bunch of passing test descriptions that you need to ignore. When you are debugging a test failure, you mostly care about the assertion failure and the logs, not the test descriptions (especially when there's no requirement about writing good test descriptions and they might just be random words that people put together...). Also, this is assuming that only a single test is run. When multiple tests are failing during a run by the Python test runner, you are still going to have to scroll and fish out failures from pages of noise from later tests, instead of just looking at only the relevant error information from all the tests that are failing. At the very least, is there e.g. an environment variable that allows us to skip the logs about successful tests? It can be opinionated, but personally I find them rather counterproductive, especially when I put any logs in the tests to aid debugging.
Isn't that going to be in conflict with the recommendation of:

? The more singled-out tests we have, the more overhead we will introduce; but if we squeeze the tests into one file, the reporter will make the test failures harder to fish out from the noise? Also, many core tests are just very lightweight; they are core tests, after all, and many of them don't test complex operations, just trivial edge cases (validation errors, simple calls to deps, pure JS computations, etc.). In many cases the biggest part of the overhead is the bootstrapping overhead, and the tests themselves take less time than the bootstrap itself. Of course there are also tests that are more complex and async, e.g. the http tests, which I think might benefit from using node:test. But I think we should also have some guidelines about when to avoid using node:test for other, smaller tests (e.g. many of the util tests).
I'm not the person advocating for a massive refactor and never have been. I will say that I would not put that in the top 10 all-time worst mistakes made by the project, or even the top 3 in the past 6 months 😄
Yes. Some of the tests are currently written with many tests in the same file. We should aim to change that.
This. It would be nice to have more subtlety around this topic. I was mostly just commenting here because I think these threads are mixing valid feedback, personal opinions, and things presented as fact that are actually incorrect and based on people's opinions. I'll stop now.
@anonrig I've just finished watching it. The following arguments in favor of
I'm getting bored of repeating myself, but don't add complexity where it is not needed, especially in tests. We only hurt ourselves with those refactors.
I think maybe a good measure here might be: only the people maintaining what the test is testing get to choose what format the test should be written in, and test-only changes are forbidden in PRs that don't simultaneously change the features that the test is testing (unless the author obviously has many commits in said feature, or there is consensus about the change among the multiple people maintaining said feature). The reasoning is that those who maintain the feature being tested would be the ones impacted the most by the test change, and they should have better judgement about whether the test format changes make their life easier or harder. That's what I have been doing so far as well: if I am touching an existing test, I respect whatever format the test is in and just follow it. But I otherwise would not author `node:test` tests myself.
This is exactly what I want
I've seen other test runners with a reporter that outputs only on failure. I think that would be good here and addresses the issue Joyee raised (CI output is indeed quite long). Writing to stdout is also not free, so that should help with performance too.
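node:test does support custom reporters, so a failures-only reporter along these lines should be doable; a rough sketch (nothing like this ships in core today, and the file name and output format are invented):

```js
// failures-only.mjs: a custom node:test reporter that stays silent for
// passing tests and only prints failures and diagnostics.
export default async function* failuresOnly(source) {
  for await (const event of source) {
    switch (event.type) {
      case 'test:fail':
        yield `✖ ${event.data.name}\n${event.data.details.error}\n`;
        break;
      case 'test:diagnostic':
        yield `# ${event.data.message}\n`;
        break;
      // test:pass, test:start, etc. intentionally produce no output.
    }
  }
}
```

It could then be selected per environment, e.g. `node --test --test-reporter=./failures-only.mjs` locally and a more verbose reporter in the CI.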
I think the
Oof, yes, this is not ideal.
Co-authored-by: Matteo Collina <[email protected]>
Co-authored-by: Matteo Collina <[email protected]>
My personal experience with
I absolutely do not want this, in the strongest of terms. IMO this is an atrocious DX. I do not have a Windows PC, and Windows is the least reliable platform: whenever there are cross-platform issues, it's always Windows. If CI stops on the first of 50 failures and I have to discover and fix them one by one, with CI taking 4 hours each run? Absolutely. Not. I would never send another PR again.
These concerns seem very actionable though 🙂 I expect we could even fairly easily facilitate the kill-all-on-first-failure via a label (I think it shouldn't be the default though, and definitely not the only option). For the output noise, I don't know enough about test reporters, but if we can't configure an existing one for a quiet mode, could we fork one to facilitate it?
I think we are talking about local behavior, which is different from the CI, and obviously the settings in the CI are different from the local ones, just like the Python runner produces TAP output in the CI but a progress bar locally, and it can skip or adapt to the environment as needed/with flags.
In most cases it is a single test, so it does not matter, and even if there are subtests, usually different failures depend on different things that are investigated and fixed separately, so the test is still run multiple times. After months of discussions (#54796, here, several refactoring PRs) I still can't see any good/solid reason in favor of using `node:test`.
Adds a contributing guideline around the usage of `node:test` in the `tests/` folder. This pull request was opened as a result of last week's TSC meeting. Potentially unblocks #55716.

cc @nodejs/tsc