multiprocessing.Queue: Exceeding a certain amount of bytes in the queue prevents proper exit #128186
What is the most recent version with the issue? 65515 works for me in IDLE on Win10 with 3.12.8 and 3.14.0a1.
On Ubuntu (22.04) and macOS (15.1.1) I have not identified a version without the issue. The latest I have tested on Ubuntu is 3.12.1, and on macOS it's 3.14.0a0; they both have the issue. Interesting that you don't have it on Win10! Waiting for more Linux/macOS users to confirm that I'm not crazy 🙏🏻
Confirmed it on Linux, for both 3.12 and main. FWIW, the script does print 'end'; pstack shows that we're getting stuck waiting on a semaphore somewhere while joining a thread. I'll investigate.
Makes sense @ZeroIntensity; for your investigation, note that it appeared between 3.9.6 (still OK) and 3.9.18 (has the issue).
3.9 - 3.11 are security-only branches, and this bug wouldn't be categorized as a security issue IMO (if you were talking about the labels; for bisecting commits, using the main branch is fine)
I do think this is possibly a security issue. It looks like this applies to any payload larger than 65514 bytes, so a program that passes user input to the queue could be made to hang on exit.
(It could be a DoS, but you should consider the possible attack vectors first IMO)
(dfb1b9da8a4becaeaed3d9cffcaac41bcaf746f4 falls in the right period and touches the queue's closing logic.)
Issue title changed from "multiprocessing.Queue: When a certain amount of bytes is put into the queue, it prevents the script from exiting." to "multiprocessing.Queue: Exceeding a certain amount of bytes in the queue prevents proper exit".
Upon further investigation, this looks unrelated to `multiprocessing` itself. For example:

```python
import os

read, write = os.pipe()
my_str = b"0" * 65536
os.write(write, my_str)  # Passes, but the pipe buffer is now full
os.write(write, b"1")    # Stuck!
```

This happens in C as well, so I doubt there's something we can do about that:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main() {
    int fds[2];
    pipe(fds);
    char *str = malloc(65536);
    memset(str, '0', 65536);
    write(fds[1], str, 65536);
    puts("Filled the buffer");
    write(fds[1], "1", 1);
    puts("We'll never get here");
    return 0;
}
```

This is documented, though. From Wikipedia:
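The same limit can be observed without hanging: switching the write end to non-blocking mode makes a full buffer raise `BlockingIOError` instead of blocking, which also lets you measure the platform's pipe capacity (65536 bytes on typical Linux). This snippet is illustrative and not from the original thread:

```python
import os

read_fd, write_fd = os.pipe()
os.set_blocking(write_fd, False)  # full buffer raises instead of blocking

total = 0
try:
    while True:
        # Keep writing until the kernel refuses to buffer any more.
        total += os.write(write_fd, b"0" * 4096)
except BlockingIOError:
    print(f"pipe buffer full after {total} bytes")

os.close(read_fd)
os.close(write_fd)
```

On Linux this typically prints 65536, matching the 65514-byte threshold observed for the queue (the remaining bytes being pickle/length overhead).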
I guess there are three options:
@picnixz, what do you think the way to go would be?
(1) seems the most conservative, the least error-prone, and the least work for us; we should document it. An alternative would be to have multiple pipes: once you've filled a pipe, you create another one (so you yourself have a queue of pipes... though this is only an idea; I'm not even sure it would be efficient, or that it even has a use case). We can ask @gpshead about that (sorry Gregory for all the mentions today, but today seems to be multiprocessing/threading/pipes issues day!)
Yeah, that would be (3) in my comment. The question comes down to what kind of maintenance burden that would have.
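For what it's worth, the "queue of pipes" idea could be sketched roughly as follows. `MultiPipeWriter` is a hypothetical name and this is only a toy illustration, not how `multiprocessing` is implemented:

```python
import os

class MultiPipeWriter:
    """Toy sketch of the 'queue of pipes' idea: when the current pipe's
    buffer fills up, allocate a fresh pipe and continue writing there."""

    def __init__(self):
        self.pipes = []
        self._new_pipe()

    def _new_pipe(self):
        read_fd, write_fd = os.pipe()
        os.set_blocking(write_fd, False)  # full buffer raises instead of blocking
        self.pipes.append((read_fd, write_fd))

    def write(self, data: bytes) -> None:
        view = memoryview(data)
        while len(view):
            _, write_fd = self.pipes[-1]
            try:
                n = os.write(write_fd, view)
            except BlockingIOError:
                self._new_pipe()  # current pipe is full: spill into a new one
                continue
            view = view[n:]  # drop the bytes that were written

writer = MultiPipeWriter()
writer.write(b"0" * 200_000)  # larger than any single pipe buffer
print(f"data spread across {len(writer.pipes)} pipes")
```

A reader would then have to drain the pipes in allocation order to reassemble the stream, which hints at the bookkeeping cost this approach would add.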
Running the file in Command Prompt, I see a hard hang (must close the window) in 3.12 and 3.13, but not in 3.14.0a1 & 3 (I get the prompt back after running).
Bug report
Bug description:
This terminates properly:
while this prints 'end' and is then stuck:
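The two scripts referenced above were not captured here. A minimal reproducer consistent with the description (the `run_case` helper and payload sizes are my own reconstruction; each case runs in a subprocess with a timeout so a hanging case cannot wedge the parent) might look like:

```python
import subprocess
import sys
import textwrap

def run_case(nbytes: int, timeout: float = 10.0) -> bool:
    """Run a child script that puts `nbytes` bytes on a multiprocessing.Queue,
    prints 'end', and exits; return True if it exited within the timeout."""
    code = textwrap.dedent(f"""
        import multiprocessing as mp
        q = mp.Queue()
        q.put(b"0" * {nbytes})
        print("end")
    """)
    try:
        subprocess.run([sys.executable, "-c", code], timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False

print(run_case(100))      # small payload: terminates properly
print(run_case(1 << 20))  # > 65514 bytes: on affected versions, prints 'end' then hangs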
CPython versions tested on:
Latest identified version without the issue: 3.9.6
Oldest identified version with the issue: 3.9.18
Operating systems tested on:
Linux, macOS