Django Channels memory leaks on opening new connections #373
Thank you for your work on this so far, @yuriymironov96. I've submitted a PR to your sample project with what seems to me like at least a naive work-around that either fully eliminated or drastically reduced the impact of this memory leak in my testing on this and another project I've been working on. That work-around is at yuriymironov96/django-channels-leak@ec0a7da and is exceptionally "knowy", as it alters the state of a different module's object. I hope folks who know these projects better than I do will either tell me why I'm wrong, or at best help drive a discussion toward a sustainable fix for the issue.
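The commit linked above is the reference for the actual change; purely as a hedged illustration of what "altering the state of a different module's object" can look like, a consumer might drop its own entries from a channel layer's internal per-channel buffer on disconnect. The `receive_buffer` attribute below is an assumption about the layer implementation, not taken from the commit:

```python
from channels.generic.websocket import AsyncWebsocketConsumer


class LeakAwareConsumer(AsyncWebsocketConsumer):
    async def disconnect(self, close_code):
        # Hypothetical sketch only: reach into the channel layer's internal
        # bookkeeping and remove this channel's entry so it cannot keep
        # accumulating after the socket closes. The attribute name depends on
        # the channel layer implementation and version.
        buffer = getattr(self.channel_layer, "receive_buffer", None)
        if buffer is not None:
            buffer.pop(self.channel_name, None)
```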
@mitgr81 I have a question: which parts of the PR are essential for the fix to work? I did a quick check:
Result:
I will continue researching your solution tomorrow, but maybe you can already point out what I might be missing?
@yuriymironov96 I think you're hitting the reason I needed to make alterations to
Also, for what it's worth, I wasn't directly testing for memory getting released on socket close, but I had altered the sample consumer to disconnect and reconnect after a short time, and I was looking to make sure memory growth was capped. Probably ultimately close to the same end result, but a slightly different approach.
@mitgr81 I have another question: are the changes made to
As for testing without changes to the client side: having changed
Moreover, it looks like there is one more memory leak that concerns the volume of messages transferred through the sockets. Having generated a 500-paragraph Lorem Ipsum message and sent it 5 times (2 tabs open), the application claimed about 1 MB of memory and never released it.
They're not, that was just to help accelerate the test in my use case.
That matches my findings. It appears that it reduced the magnitude of the leak, but did not eliminate it entirely.
Seems like a good place to keep digging; I appreciate the collaboration!
I was able to replicate the memory leak. It is much more pronounced than I expected. My observations:
So perhaps the issue is related to Daphne?
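For anyone trying to reproduce these observations, the growth is easiest to see by sampling the server process's RSS from outside while opening and closing tabs; a rough sketch using psutil (the PID argument and sampling interval are placeholders):

```python
# watch_rss.py -- hypothetical helper that logs a server process's RSS over time.
# Usage: python watch_rss.py <daphne-or-uvicorn-pid>
import sys
import time

import psutil  # third-party: pip install psutil

pid = int(sys.argv[1])
proc = psutil.Process(pid)

while True:
    # Resident set size of the server process, in MiB.
    rss_mb = proc.memory_info().rss / (1024 * 1024)
    print(f"{time.strftime('%H:%M:%S')} RSS: {rss_mb:.1f} MiB")
    time.sleep(5)  # sample every 5 seconds while opening/closing connections
```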
I could see that. We have swapped from running purely Daphne in our environments to round-robin Daphne/Uvicorn deployments. Anecdotally it's been better, but we also shipped with a patch around this issue at the same time.
After a week of using Uvicorn instead of Daphne in our production environment, I'm now fully convinced that this leak is related to Daphne, not channels. In my use case, the results are significant. Not only that, the switch to Uvicorn also had a significant effect on CPU usage: it's lower than that of Daphne. As a result, we have not only addressed the memory leak but also improved website performance.
OK, let's move this over to Daphne. Thanks @revoteon
Any movement on this? I've been having memory consumption crashes and have narrowed things down in my use case to it very likely being this; it fits like a glove. EDIT: I was able to resolve my leak by reading the updated docs, getting good, and fixing my code. I wasn't cleaning this up with a
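For anyone landing here with a similar self-inflicted leak: the Channels docs have consumers leave every group they joined when the socket disconnects. A minimal sketch along those lines (the group name is illustrative):

```python
from channels.generic.websocket import AsyncWebsocketConsumer


class ChatConsumer(AsyncWebsocketConsumer):
    group_name = "chat"  # illustrative group name

    async def connect(self):
        # Join the group on connect...
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        # ...and leave it on disconnect; otherwise the channel layer keeps a
        # reference to the now-dead channel and memory grows with every
        # reconnect.
        await self.channel_layer.group_discard(self.group_name, self.channel_name)
```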
Same here: it works well with Python 3.6, but when I upgraded to Python 3.8, Daphne began leaking memory. I have these packages installed:
We ended up here as we were looking for the source of memory leaks. We are currently using Daphne on a Python 3.11 Docker x86 image. Daphne on ARM seems to be fine, though.
Is this still unresolved? I'm trying Daphne for the first time and noticed my memory usage in Docker goes from 4 GB to 6 GB+ and invariably crashes Docker.
Hello @BSVogler 👋 Did you manage to find the root cause of that issue? I think I might be running into the exact same situation after the Python 3.11 upgrade.
Hello,
No, we switched over to Hypercorn.
My issue appears to have gone away. Not sure how or why.
It would be good if someone could provide a minimal reproduction involving just Daphne and a simple application, and not e.g. Docker & co. Likely it's an issue in the application, but it's impossible to tell without anything (small) to reason about.
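For reference, a reproduction along those lines can be a single ASGI module run directly under Daphne. The sketch below is only an assumed shape for such a report, not a confirmed reproducer of the leak:

```python
# asgi.py -- hypothetical minimal app, run with:  daphne asgi:application
import django
from django.conf import settings

# Minimal in-module settings so no full Django project is needed.
settings.configure(ALLOWED_HOSTS=["*"])
django.setup()

from django.urls import path  # noqa: E402
from channels.generic.websocket import AsyncWebsocketConsumer  # noqa: E402
from channels.routing import ProtocolTypeRouter, URLRouter  # noqa: E402


class EchoConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # Echo back whatever arrives so each connection does a little work.
        await self.send(text_data=text_data)


application = ProtocolTypeRouter(
    {"websocket": URLRouter([path("ws/", EchoConsumer.as_asgi())])}
)
```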
@BSVogler Thanks a lot for the update 🙇 @carltongibson I'm currently trying to nail down the issue that we're encountering. If I manage to pin it down to specific behavior of
@pgrzesik Thank you for taking this up!
Hey 👋 After digging deeper into it, it seems like the issue on our end is related to some weird interaction between
@aaditya-ridecell What Python version are you using?
Thanks for the effort @pgrzesik -- even a partial result helps narrow things down. 🎁
Thanks for looking into this @pgrzesik. We are using Python 3.8 and don't use the ddtrace library. |
Hey, any updates, guys? I'm experiencing the same thing. How did you fix the issue?
Hey, is there any update? |
Hello! First of all, this is a great project and I wanted to thank all the maintainers for keeping it amazing!
For the last two months, django-channels websockets on our production service have been periodically failing with OOM. Having researched it, it looks like a django-channels memory leak.
I think it may be related to opening new channel connections and then improperly closing them in the channels_layer object. Opening multiple browser tabs that lead to a single channels group increases memory usage, and closing them does not release the memory (the disconnect does occur, though).
I have also managed to reproduce it on a simple project with minimal dependencies and steps to reproduce, so please feel free to check it out: https://github.com/yuriymironov96/django-channels-leak. This is a project based on django-channels tutorial.
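As a complement to opening browser tabs by hand, the connect/disconnect churn can also be scripted while watching the server's memory; a sketch using the third-party websockets package (the URL and message format are assumptions based on the tutorial-style sample project):

```python
# churn.py -- hypothetical script to open and close many websocket connections
# against the sample project while observing the server process's memory.
import asyncio

import websockets  # third-party: pip install websockets

URL = "ws://localhost:8000/ws/chat/lobby/"  # assumed tutorial-style route


async def churn(cycles: int = 200) -> None:
    for i in range(cycles):
        # Each iteration opens a connection, exchanges one message, and closes.
        async with websockets.connect(URL) as ws:
            await ws.send('{"message": "ping"}')
            await ws.recv()
        if i % 20 == 0:
            print(f"completed {i} connect/disconnect cycles")


asyncio.run(churn())
```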
Here are the dependencies of the sample project:
I have tried both channels_redis and channels_rabbitmq, and multiple servers (django debug server, daphne, uvicorn), and the issue still persists. The sample benchmarks are:
It may look like a minor leak at this rate, but it scales quickly and occurs frequently due to the high load of our application.
Could you please have a look at it and share any ideas you have?