High-level helpers for working with unix domain sockets #279
Comments
If we do do unlink on close, then it's important to note that So one option would be |
I think the locking algorithm in #73 (comment) will work, but it's super annoying. So I'm leaning towards: start by implementing a version where we never clean up stale socket files, and then if/when someone complains, implement the full locking version, so we can keep the same API. [Edit: or maybe not... asyncio doesn't clean up stale socket files, and uvloop originally did. From this we learned that starting to clean up socket files when you didn't before is a backcompat-breaking change. So we should probably pick one option and stick to it.]

Other libraries, in their "try to blow away an existing socket file first" code, check to see if it's a socket and only unlink it if so. We aren't going to unlink the file, but we might still want to do a just-in-case check where if the target file exists and is not a socket, then error out instead of renaming over it. |
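For concreteness, the "don't clobber something that isn't a socket" guard described above only takes a few lines. This is just a sketch of the check; the function name is mine, not an agreed-on API:

import os
import stat

def refuse_to_clobber_non_socket(path):
    # If something already exists at the target path and it isn't a socket,
    # error out rather than rename over it and destroy real data.
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return  # nothing there; safe to proceed
    if not stat.S_ISSOCK(st.st_mode):
        raise FileExistsError(f"{path!r} exists and is not a socket")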
Hi, has this had any work at all? If not, I'd like to take an attempt at making it. I'm working on converting one of my libraries to support curio and trio, and this is a missing part for a bit of it for the trio side. |
Are you looking to implement helpers for clients, servers, or both? The client part should be pretty straightforward; the server part is more complicated and requires some subtle decisions that I've been procrastinating on :-). Either way, there's no one currently working on this, and if you want to go for it then that'd be awesome.
|
I was looking at implementing both, but if you're waiting on design decisions I only need the client part so I can just restrict it to that for now. |
Unix client socket support. See: #279
[Just discovered this half-written response that's apparently been lurking in a tab for like 3 months... whoops. I guess I'll post anyway, since it does have some useful summary of the issues, even if I didn't get around to making any conclusions...]

@SunDwarf Well, let me brain dump a bit about the issues, and maybe it'll lead to some conclusion, and maybe you'll find it interesting in any case :-) The tricky parts are in starting and stopping a unix domain listening socket, and there are two issues that end up interacting in a complicated way.

First issue: the ideal way to set up a unix domain listening socket is to (a) bind it under a temporary name, (b) get its permissions and backlog configured, and then (c) atomically rename it into place, so that it only ever appears at its final path fully set up.

Second issue: tearing down a socket. Unix domain sockets have really annoying semantics here: when you close a listening socket, it leaves behind a useless stray file in the filesystem. You can't re-open it or anything; it's just a random piece of litter. One consequence is that when creating a listening socket, you might need to clean up the old socket; the consensus seems to be that you should just go ahead and silently do this.

Now here's the problem: if we want to support zero-downtime upgrades, we might be replacing the old server's socket with the new server's socket. If this happens just as the old server is shutting down, and the old server has code to clean up the socket file when it shuts down, then if the timing works out just right it might accidentally end up deleting the new socket file.

This puts us in a difficult position: zero-downtime upgrades are a feature not many people need, but one that's irreplaceable when you do need it; OTOH not cleaning up old sockets is a low-grade annoyance for everyone, including lots of people who don't care about zero-downtime upgrades. And for bonus fun, we still don't have a full solution for zero-downtime upgrades, because we don't have a good way to do graceful shutdown of a server (meaning: drain the listening socket and then close it). See #14, #147.

We have a few options:
Bonus: it's not trivial to switch between these options later. For example, here's a bug filed on uvloop where code was breaking because asyncio doesn't clean up unix domain listening sockets, so someone wrote code that assumes this and tries to clean up manually, and uvloop did clean them up, so the manual cleanup code crashed. There are some things we could do to mitigate this, like put a giant warning in the docs saying that you shouldn't make assumptions about whether the stale socket gets left behind. But who knows if people would listen. Quite a mess, huh? |
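For comparison, the "silently blow away an existing socket file before binding" convention that other libraries follow (mentioned a few comments up) looks roughly like this. This is a sketch of the general pattern only, not any particular library's code, and the helper name is mine:

import os
import socket
import stat

def bind_unix_listener_with_unlink(path, backlog=128):
    # Remove a pre-existing *socket* at the target path, then bind normally.
    # Regular files, directories, etc. are left alone.
    try:
        if stat.S_ISSOCK(os.stat(path).st_mode):
            os.unlink(path)
    except FileNotFoundError:
        pass
    sock = socket.socket(socket.AF_UNIX)
    sock.bind(path)
    sock.listen(backlog)
    return sock

The zero-downtime concern above is exactly that this unlink (or a matching unlink-on-close) can race with another server that has just renamed its own socket into place.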
Now that the latest versions of Windows support AF_UNIX, I guess we'll have to take that into account as well. According to some random SO answer, there is a way to get atomic rename on recent Windows (AFAICT "Windows 10 1607" is well before the AF_UNIX work): https://stackoverflow.com/a/51737582 [Edit: @pquentin found the docs! https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntifs/ns-ntifs-_file_rename_information ...From this, it's actually not clear whether it is atomic?] Unfortunately I won't believe we got the Windows details right without testing them (e.g. how does Windows handle a renamed AF_UNIX socket?), and I don't know whether Appveyor has a new enough version of Windows to let us test them yet. |
Is there any way to find out whether the socket being closed is the final copy of that socket, or alternatively to keep track of copies created by fork etc.? The socket should not be unlinked as long as there are other copies still listening. I'm using the socket's mtime to track whether it has been replaced by another instance. There are a few race conditions (identical timestamps for sockets created at the same time, or someone getting in between the mtime check and the unlink, because I don't bother to use locks). |
That's easy. Try to connect to it after closing your listener. |
Closing still needs work and I have to work on something else. Here's the work so far:

import os
import socket
import stat
from uuid import uuid4

import trio


class UnixSocketListener(trio.SocketListener):
    def __init__(self, sock, path, inode):
        self.path, self.inode = path, inode
        super().__init__(trio.socket.from_stdlib_socket(sock))

    @staticmethod
    def _create(path, mode, backlog):
        if os.path.exists(path) and not stat.S_ISSOCK(os.stat(path).st_mode):
            raise FileExistsError(f"Existing file is not a socket: {path}")
        sock = socket.socket(socket.AF_UNIX)
        try:
            # Using umask prevents others tampering with the socket during creation.
            # Unfortunately it also might affect other threads and signal handlers.
            tmp_path = f"{path}.{uuid4().hex[:8]}"
            old_mask = os.umask(0o777)
            try:
                sock.bind(tmp_path)
            finally:
                os.umask(old_mask)
            try:
                inode = os.stat(tmp_path).st_ino
                os.chmod(tmp_path, mode)  # os.fchmod doesn't work on sockets on MacOS
                sock.listen(backlog)
                os.rename(tmp_path, path)
            except:
                os.unlink(tmp_path)
                raise
        except:
            sock.close()
            raise
        return UnixSocketListener(sock, path, inode)

    @staticmethod
    async def create(path, *, mode=0o666, backlog=None):
        return await trio.to_thread.run_sync(
            UnixSocketListener._create, path, mode, backlog or 0xFFFF
        )

    def _close(self):
        try:
            # Test connection: if nothing is accepting any more and the file at
            # self.path is still our socket (same inode), unlink it.
            s = socket.socket(socket.AF_UNIX)
            try:
                s.connect(self.path)
            except ConnectionRefusedError:
                if self.inode == os.stat(self.path).st_ino:
                    os.unlink(self.path)
            finally:
                s.close()
        except Exception:
            pass

    async def aclose(self):
        with trio.fail_after(1) as cleanup:
            cleanup.shield = True
            await super().aclose()
            self._close()


async def open_unix_listeners(path, *, mode=0o666, backlog=None):
    return [await UnixSocketListener.create(path, mode=mode, backlog=backlog)]

The test connection trick doesn't work perfectly (closing races, I think), and in this version the cleanup is blocking. For some reason, trio.to_thread.run_sync fails with |
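For anyone who wants to play with the sketch above, here's roughly how it would plug into Trio's high-level server loop. This assumes the open_unix_listeners function from the sketch is in scope; the socket path and handler are arbitrary examples:

import trio

async def echo_handler(stream):
    # Minimal demo handler: echo every received chunk back to the client.
    async for chunk in stream:
        await stream.send_all(chunk)

async def main():
    listeners = await open_unix_listeners("/tmp/demo.sock", mode=0o600)
    await trio.serve_listeners(echo_handler, listeners)

trio.run(main)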
Minor irritation with the atomic rename approach: the original random socket name also shows up as the peername seen by clients. |
Maybe it's simpler to just offer users a way to disable auto-deletion? (Assuming we support auto-deletion at all...)
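If we went that route, the opt-out could be a single keyword argument layered on top of the sketch above. This is purely hypothetical API shape; the cleanup flag is invented for illustration and nothing like it exists in Trio today:

# Assumes the UnixSocketListener sketch from earlier in the thread is importable.
async def open_unix_listeners_opt(path, *, mode=0o666, backlog=None, cleanup=True):
    listener = await UnixSocketListener.create(path, mode=mode, backlog=backlog)
    if not cleanup:
        # Hypothetical opt-out: make the unlink-on-close step a no-op, so the
        # socket file is left behind (zero-downtime-upgrade friendly).
        listener._close = lambda: None
    return [listener]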
Oh, huh, good point. And we'll also see it if we call We should add some API to Of course, that doesn't help for other non-Trio programs connecting to Trio-created Unix-domain sockets, but maybe that's not a big deal.
Yeah, I don't think we can afford to use umask. We do need to think carefully about race conditions here though... imagine this code gets run by root, to bind a socket in a globally writable directory.
|
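One way to avoid touching the process-wide umask while still keeping the socket unreachable during setup is to bind it inside a freshly created private directory and rename it out afterwards. A sketch under the assumption that the temporary directory can live next to the target path (same filesystem, so the rename stays atomic); none of these names are proposed API:

import os
import socket
import tempfile

def bind_unix_listener_privately(path, mode=0o666, backlog=128):
    # mkdtemp creates a mode-0o700 directory, so no other user can reach the
    # socket (or tamper with it) between bind() and chmod().
    tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(os.path.abspath(path)))
    tmp_sock = os.path.join(tmp_dir, "sock")
    sock = socket.socket(socket.AF_UNIX)
    try:
        sock.bind(tmp_sock)
        os.chmod(tmp_sock, mode)
        sock.listen(backlog)
        os.rename(tmp_sock, path)  # atomic replace on the same filesystem
    except BaseException:
        sock.close()
        raise
    finally:
        # Best-effort cleanup of the scratch directory (and the socket file,
        # if the rename never happened).
        try:
            os.unlink(tmp_sock)
        except OSError:
            pass
        os.rmdir(tmp_dir)
    return sock

This still has the drawback raised above: the address baked in at bind() time is the temporary one, so that is what shows up as the peername.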
It is certainly difficult to do right. The test connection might also have side effects on the accepting end (e.g. the application logging messages about failed connection handshakes). And if enabled by default without sufficient checks, multi-processing applications would get very nasty bugs (socket unlinked whenever the first worker terminates / is restarted).

Fortunately there is a simple and fairly safe solution: never automatically unlink, but provide a utility function for unlinking. A multi-processing application could choose when to call this function (e.g. when it initiates shutdown, even before the workers actually terminate), and the function would still internally check that the inode number matches. This leaves a race condition between stat and unlink, between which another instance might replace the socket, which I can imagine being an issue with restarts like
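The explicit-unlink utility suggested above might look something like this; the name and the inode-check behaviour are just my reading of the proposal, and the stat/unlink race mentioned in the comment is still present:

import os
import stat

def unlink_unix_socket(path, expected_inode):
    # Remove the socket file only if it is still the one we created (same
    # inode), so we never delete a replacement server's socket -- except in
    # the unavoidable window between the stat and the unlink.
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return False
    if not stat.S_ISSOCK(st.st_mode) or st.st_ino != expected_inode:
        return False
    os.unlink(path)
    return True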
Considering unlink + bind without rename, isn't the downtime there extremely minimal? Any connections in the old server's backlog stay there, and a new socket is bound and listening within microseconds. Hardly any connections should appear within such a window, and network clients need retry logic in any case. In particular, unlink/bind could work as a fallback on Windows if true atomic replacement is not possible. |
I don't think I'm as worried about this as you are. Sharing a listening socket between processes isn't something you do by accident – it requires some careful work to get everything shared and set up properly. Even after we add an When choosing the defaults, we want to put more weight on doing something sensible that works for less sophisticated users, who don't know enough to pick their own settings.
Sure, but "definitely no connections are lost" is still better than "probably not many connections are lost" :-). The rename part is cheap and easy, and the only downside is the weird peername. |
Actually it is quite simple to do with the high level API (probably not using Trio as planned, again):
Also, I suppose that sharing a listener object is actually something quite easily done by accident, even if the socket is not inheritable, by some completely unrelated fork() -- but that would have bigger issues: trying to close an fd that doesn't belong to the process. |
( Sharing trio objects across multiple calls to
Much bigger issues. If you try to |
[sorry, I fat-fingered that and posted a partial comment. I've reopened and edited my comment above to fill in the missing bits.] |
Dang! Looks like I accidentally mixed things from the Nim language (where it is
I have to admit that I tried forking in an async context first. As you can imagine, it didn't end well. Maybe I should have used exec or trio processes, but I haven't found a convenient way to call Python code of the currently running program (determining how the program was started and trying to run a new interpreter etc. gets complicated). Sanic is using a similar approach with asyncio -- calling

Btw, I have to admire the eye you have for detail. I've read your blog posts about KeyboardInterrupt/signals and cancellation, and I see the same attitude here with getting the UNIX sockets precisely right. Actually, I originally found Trio when looking for information about cancellation in asyncio and was astonished by how the entire topic was apparently being ignored by everyone else. Needless to say, I was instantly convinced to switch, and seeing bits of that kind of quality everywhere I look in Trio is certainly satisfying (the "little" details like making sure that SSL server_hostname actually gets verified, or making sockets NODELAY by default -- details unnoticed by most, but they make a big difference as a whole). Keep up the good work! :) |
Unfortunately I don't think there's another reliable option... you can fork before starting any async code (and possibly pass the sockets over to the child processes later), but even then you'll probably find yourself having to support spawning new processes to support Windows. I'm pretty sure that if you don't change directories, don't change the environment, and run
Unfortunately, it's not supported in Trio. It creates huge complications in asyncio because when
Ah, thank you! |
(a quicker response than I'd like, and quite off topic too)
This makes me think that the sane solution is to accept that main is getting run (including the

Fortunately I believe I can restructure the problem so that the first process starts normally but ends up opening the listening sockets, and then restarts the whole program as child processes that also end up in the same place -- but instead of opening sockets, they receive them pickled (and on Windows duplicated, as is required) from the original one. This has the benefit that all state other than the socket is initialized in the normal way in each child process, and won't need to be pickled at all. |
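On the "children receive the sockets from the original process" idea: on Windows the stdlib already has the duplication primitives the comment alludes to, socket.share() and socket.fromshare(). Below is a rough sketch of handing a bound listening socket to a re-launched copy of the program over its stdin; the helper names and the "--worker" flag are invented for illustration:

import socket
import subprocess
import sys

def spawn_worker_with_socket(listen_sock):
    # Parent side (Windows): re-launch the current program as a worker and
    # hand it the listening socket.  share() duplicates the handle into the
    # child process identified by pid.
    proc = subprocess.Popen(
        [sys.executable, *sys.argv, "--worker"], stdin=subprocess.PIPE
    )
    data = listen_sock.share(proc.pid)
    proc.stdin.write(len(data).to_bytes(4, "big") + data)
    proc.stdin.close()
    return proc

def receive_shared_socket():
    # Worker side: rebuild the socket object from the bytes sent by the parent.
    raw = sys.stdin.buffer
    size = int.from_bytes(raw.read(4), "big")
    return socket.fromshare(raw.read(size))

On POSIX the same shape works with plain fork/inheritance or by sending the fd over a unix socket (e.g. socket.send_fds() on Python 3.9+), and the worker can wrap the result with trio.socket.from_stdlib_socket() to build a trio.SocketListener.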
Now that the high level networking API is coming together, we should also add unix domain helpers. Should be pretty straightforward.
Some notes here: #73 (comment)
I guess something like mode=0o666 matches what twisted does; tornado does 0o600. Should research which is better as a default.

The biggest issue is to figure out what to do about unlink-on-close. It's nice to keep things tidy, but it introduces a race condition if you're trying to do no-downtime upgrades...
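For background on that mode choice (my note, not from the issue): on Linux, connecting to an AF_UNIX socket requires write permission on the socket file, so the default decides who can reach the server at all. A quick illustration with an arbitrary path:

import os
import socket
import stat

sock = socket.socket(socket.AF_UNIX)
sock.bind("/tmp/example.sock")
os.chmod("/tmp/example.sock", 0o600)   # only the owner (and root) may connect
# os.chmod("/tmp/example.sock", 0o666) # any local user may connect
sock.listen(16)
print(stat.filemode(os.stat("/tmp/example.sock").st_mode))  # e.g. 'srw-------'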