Version 8.0.0 rc 9 throws memory leak #2612
Comments
Would you be able to provide a reproduction? 🙏

Why do I need to provide a reproduction? Reproductions make it possible for us to triage and fix issues quickly with a relatively small team. They help us discover the source of the problem, and can also reveal assumptions you or we might be making.

What will happen? If you've provided a reproduction, we'll remove the label and try to reproduce the issue. If we can, we'll mark it as a bug and prioritise it based on its severity and how many people we think it might affect.

How can I create a reproduction? We have a couple of templates for starting with a minimal reproduction:

👉 Reproduction starter (v8 and higher)

A public GitHub repository is also perfect. 👌

Please ensure that the reproduction is as minimal as possible. See more details in our guide.

You might also find these other articles interesting and/or helpful: |
It's impossible to replicate our environment... Can we provide our package.json and nuxt.config.ts? |
A minimal reproduction would of course help the most, but if you could provide package.json and nuxt config that would narrow down what features and dependencies could cause this 🙏. |
If possible, I would like to find out from which version this is happening. |
package.json and nuxt.config.ts:

```ts
export default defineNuxtConfig({
})
```
|
It works fine in rc5! In rc6 it doesn't work. I haven't tested whether rc7 and rc8 fail. |
I reproduced and profiled the memory leak with a fresh project. Like @agracia-foticos, I can confirm that there is no memory leak in version rc5. rc6 wouldn't build, so I tested rc7, rc8 and rc9; all have the memory leak. I profiled the different builds by taking heap snapshots.

Here are the snapshots for the rc5 version (memory is stable). Here are the snapshots for the rc7 version (memory is leaking).
|
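(For reference, a minimal sketch of the kind of load loop that can drive the server before taking snapshots; the URL, request count and concurrency here are assumptions, and it presumes the built server is already running on localhost:3000 under Node 18+ so the global fetch is available.)

```ts
// load.ts -- hypothetical load driver: fire a burst of requests at the built
// Nuxt server so heap snapshots taken before/after can be compared.
const TARGET = 'http://localhost:3000/'
const TOTAL = 1000
const CONCURRENCY = 50

async function worker(requests: number): Promise<void> {
  for (let i = 0; i < requests; i++) {
    // Read the body so the connection is fully drained before the next request.
    await fetch(TARGET).then(res => res.text())
  }
}

async function main(): Promise<void> {
  const perWorker = Math.ceil(TOTAL / CONCURRENCY)
  await Promise.all(Array.from({ length: CONCURRENCY }, () => worker(perWorker)))
  console.log(`Sent ~${TOTAL} requests; take a heap snapshot now and diff it against the baseline.`)
}

main().catch(console.error)
```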
I can confirm that it's reproducible on rc6 too, as well as rc7, rc8 and rc9. |
The latest edge release contains a fix for what was likely the larger memory leak; please let me know if you can confirm this in your project! Install it as an alias:

From my testing it seems like there is still a smaller memory leak present; I'm still working on finding the cause and fixing that. |
I can confirm that in my minimal reproducible test case the main leak is not present anymore, but like you I still see a smaller one. Thanks for the quick fix. |
It seems that the small remaining leak comes from this line: i18n/src/runtime/plugins/i18n.ts, line 93 (commit 225f1b5)
|
@thomaspaillot |
@agracia-foticos If a reproduction isn't possible, I would at least like to know where and how you use this module: inside plugins or middleware, whether you use translations inside head tags, inside Pinia, and so on. Hopefully we can get this fixed and get v8 stable! 😄 |
rc11: the memory leak still persists. I use translations inside useHead. I don't use translations in stores, plugins or middleware. |
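(As a point of reference, a minimal sketch of the usage pattern reported here, a translation used inside useHead; the composable name and translation key are hypothetical, and it assumes a Nuxt 3 app with @nuxtjs/i18n where useHead and useI18n are available through the #imports alias.)

```ts
// composables/usePageTitle.ts -- hypothetical composable showing a translated,
// locale-reactive value passed to useHead.
import { computed } from 'vue'
import { useHead, useI18n } from '#imports'

export function usePageTitle() {
  const { t } = useI18n()

  useHead({
    // The computed keeps the title reactive to locale changes.
    title: computed(() => t('pages.home.title')),
  })
}
```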
Added useHead here: https://github.com/s00d/max-call-err

```sh
npm i
npm run build
node .output/server/index.mjs
ab -n 1000 -c 100 http://localhost:3000/
```

After 1000 requests, the RAM usage returned to where it was before, within less than 1 MB. |
@agracia-foticos, perhaps your issue is related to using |
I have multiple 'await stores' before computed properties with t() :( |
@BobbieGoede #2629 I think it's the same issue; I have multiple t() calls inside computed properties before awaits with stores. |
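(For clarity, a minimal sketch of the pattern being described: computed properties that close over t() are created before an awaited call. The function name is hypothetical and the awaited helper stands in for a Pinia store action.)

```ts
// Hypothetical setup-style composable illustrating the pattern above.
import { computed } from 'vue'
import { useI18n } from 'vue-i18n'

// Stand-in for an awaited Pinia store action.
async function loadProducts(): Promise<string[]> {
  return Promise.resolve(['a', 'b'])
}

export async function useProductsPage() {
  const { t } = useI18n()

  // t() is captured in computed properties before the await...
  const title = computed(() => t('products.title'))
  const emptyLabel = computed(() => t('products.empty'))

  // ...followed by one or more awaited store/data calls.
  const products = await loadProducts()

  return { title, emptyLabel, products }
}
```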
Can you check if you're still experiencing the leak using the latest edge release? ( |
@agracia-foticos @szwenni |
I will test it and let you know when I have news. |
When I launch `npx nuxi build`:

```
Could not resolve module "@intlify/vue-i18n-bridge/lib/index.mjs"        8:30:47
  at setupAlias (node_modules/@nuxtjs/i18n/dist/module.mjs:118:13)

 ERROR  Could not resolve module "@intlify/vue-i18n-bridge/lib/index.mjs"
```
|
@agracia-foticos |
I tested crawling our site with Screaming Frog. This is the graph with 30 simultaneous threads. The memory leak seems to disappear. |
That's odd, the memory usage should still climb to a certain extent during requests (this mostly depends on the amount of messages being loaded) but lower again after about 10-20 seconds. I have some changes in mind to improve the overall memory usage (they won't necessarily fix leaks), but unfortunately these changes will likely take weeks or months to implement.

Do you know how Screaming Frog and Jmeter differ in their method? I'm not familiar with these tools, but I can't think of a reason for the memory leak to be present with one and not the other. To absolutely ensure you're using the updated dependencies during your tests, you could try installing |
They are similar, but with Jmeter you can control the crawling speed much better, so to measure page loads we prefer Jmeter. I will try to explicitly install vue-i18n@^9.9.0 together with the edge @nuxtjs/i18n and I'll report back. |
@agracia-foticos |
Sorry, our project is a private e-commerce site and it is not possible to give access to people outside the company. |
Ah I see.
To get an idea of what could be leaking, I use Chrome DevTools to take heap snapshots before and after the leak and compare them; you can read how to do this here: https://nodejs.org/en/guides/diagnostics/memory/using-heap-snapshot. Finding what exactly is leaking is still not very straightforward: if I see something that hints at a certain feature leaking, I try to trigger the leak in a minimal reproduction to confirm and further narrow it down. Previously it was the messages leaking (it could possibly still be, in certain configs), so using a large locale file with varying message types (arrays, nested keys) made it easier to see if a leak was triggered (the locale file in this reproduction, for example: https://github.com/BobbieGoede/i18n-memory-leak). There are probably other ways to debug memory leaks in a Node process that I'm not aware of, but maybe this will help you identify the issue. Let me know if you have any questions! |
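(As a complement to the DevTools workflow above, a minimal sketch of capturing snapshots programmatically with Node's built-in v8 module; the file naming and the place it is called from, for example a temporary debug-only hook in the server, are assumptions.)

```ts
// heap-snapshot.ts -- writes .heapsnapshot files from inside the running
// process; open them in Chrome DevTools (Memory tab) and compare.
import { writeHeapSnapshot } from 'node:v8'

export function captureHeapSnapshot(label: string): string {
  const file = writeHeapSnapshot(`./${label}-${Date.now()}.heapsnapshot`)
  console.log(`Heap snapshot written to ${file}`)
  return file
}

// Example: snapshot before and after a burst of requests, then diff the two.
captureHeapSnapshot('baseline')
// ...drive load against the server here...
captureHeapSnapshot('after-load')
```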
Same issue here after we upgraded from an earlier version. We're running on AWS EC2 with Node 18.19.0. We'll try updating the rc versions step by step to maybe confirm where it breaks. |
@dreitzner |
@BobbieGoede I created an internal ticket. I'll follow up here when I have more info 😉 |
I ran a quick load test locally:
Test setup with
Test results with version:
Console output
|
Ah that's unfortunate to hear, I'm assuming you are using |
@BobbieGoede I updated the test results. Unfortunately I can't share the repo, but I'd be more than happy to set up a call with you and do live debugging. Feel free to reach out:
|
¡¡VERSION 8.0.1 works fine!! |
I'm closing the issue! Thank you very much! |
Environment
- Operating System: Linux
- Node Version: v20.5.1
- Nuxt Version: 3.8.2
- CLI Version: 3.10.0
- Nitro Version: 2.8.1
- Package Manager: [email protected]
- Builder: -
- User Config: -
- Runtime Modules: -
- Build Modules: -
Reproduction
This graph is our application with i18n 8.0.0-rc.9, under one Screaming Frog crawl with 20 simultaneous threads and one Jmeter run with 100 simultaneous threads.
The memory rises and the garbage collector doesn't reclaim it (the drop that can be seen is because we restart the server automatically when we reach a certain threshold).
The same situation (the same packages and config) but with i18n 8.0.0-rc.5 (we only change the version of this library): the memory is stable.
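(Not part of the original report, but a minimal sketch of how the server-side memory curve can be observed without an external monitor: a Nitro plugin that logs RSS periodically. defineNitroPlugin is auto-imported in a Nuxt server context; the interval and formatting are arbitrary.)

```ts
// server/plugins/memory-log.ts -- hypothetical Nitro plugin logging memory
// usage every 10 seconds so growth is visible in the server logs.
export default defineNitroPlugin(() => {
  const toMB = (bytes: number) => (bytes / 1024 / 1024).toFixed(1)

  setInterval(() => {
    const { rss, heapUsed } = process.memoryUsage()
    console.log(`[memory] rss=${toMB(rss)}MB heapUsed=${toMB(heapUsed)}MB`)
  }, 10_000)
})
```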
Describe the bug
Additional context
No response
Logs
No response