MTU-related severe performance issues #284
I just wanted to add that I have severe performance issues with the default MTU of 65520, too. I'm running on a Windows host with a Linux guest VM (Ubuntu 20.04), running iperf3 from a container on the VM and connecting to the host:

EDIT: For completeness, here are the stats when connecting from the host to the container with the port driver slirp4netns. No MTU-dependent slowdown here.

EDIT again: For these tests I was using Docker Rootless v20.10.12.
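If anyone wants to reproduce a test along these lines, it looks roughly like this (a sketch only; the address and port are placeholders, not the exact commands I used):

```sh
# On the host: start the iperf3 server
iperf3 -s -p 5201

# Inside the container on the VM: run the client against the host
# (HOST_IP is a placeholder for whatever address the container reaches the host on)
iperf3 -c HOST_IP -p 5201 -t 10
```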
Can verify the above. In a rootless Docker environment using nginx 1.20 to proxy internal containers, we were seeing many of the requests to nginx take 10+ seconds, while requests directly to the proxied services took less than a second. The MTU on the Docker daemon was set to 65520; reducing it to 48000 fixed the issue. Using Ubuntu 20.04.2.
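If someone wants to try the same workaround, one way to set the daemon-level MTU is via daemon.json. This is a sketch, not a copy of our exact config (it overwrites any existing daemon.json, so merge by hand if you already have one):

```sh
# Lower the Docker daemon's default MTU (rootless Docker reads
# ~/.config/docker/daemon.json; rootful Docker uses /etc/docker/daemon.json).
mkdir -p ~/.config/docker
cat > ~/.config/docker/daemon.json <<'EOF'
{
  "mtu": 48000
}
EOF
# Restart the rootless daemon so the setting takes effect
systemctl --user restart docker
```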
Same issue. Using MTU=1500 gives ~1 Gbit/s. These are iperf results between the container and another server in the same subnet.
This may be the same thing as #128, as that also has a 5-second delay at the beginning, but I can't tell, and I have a lot of detail, so I didn't want to clutter that issue up.
Short version: in rootless podman with default everything (i.e. slirp4netns with the default MTU of 65520), curl of a file larger than the MTU takes 5 seconds when it should take far less than a second. Reducing the MTU fixes it.
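For anyone who just wants the workaround before the details: the slirp4netns MTU can be lowered per container through podman's network option. A minimal sketch, using the 48000 value my binary search below lands on; the published port is a placeholder, not part of the actual repro:

```sh
# Override slirp4netns' default MTU of 65520 for this container
podman run --rm -p 8080:8080 --network slirp4netns:mtu=48000 slirptest
```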
My environment:
Repro:
Dockerfile:
Run:
podman build -t slirptest .
In another window on the same host (maybe in a temp dir):
In the other window:
Note that it pauses for about 5 seconds after the first chunk of data.
Then try:
500x performance difference. :D
Also:
So the 64k file causes the problem but the 63k file does not.
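To make the file-size dependence easy to poke at, here is a sketch of the kind of test involved; the file names, the port, and python3's built-in HTTP server are stand-ins, not the exact setup from my repro above:

```sh
# One file just under 64 KiB, one at exactly 64 KiB
truncate -s 63K file-63k.bin
truncate -s 64K file-64k.bin

# Serve the current directory (stand-in for whatever actually serves the files)
python3 -m http.server 8080 &

# Fetch each file through the slirp4netns path; in my case the 64 KiB file
# stalls for ~5 seconds while the 63 KiB file comes back immediately
time curl -s -o /dev/null http://HOST_OR_GATEWAY:8080/file-63k.bin
time curl -s -o /dev/null http://HOST_OR_GATEWAY:8080/file-64k.bin
```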
In case it's relevant, here are the host's MTU configs:
The communication in question appears, to tcpdump, to come over `lo`. My binary search shows that the issue doesn't occur at mtu=48000 and lower, but does occur at mtu=48500 and higher. I have no idea what the significance of that is.
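For anyone who wants to poke at this the same way, a rough sketch of the inspection steps (the port and container name are placeholders, and tap0 assumes slirp4netns' default interface name):

```sh
# Watch the transfer on the loopback interface while the curl runs
sudo tcpdump -i lo -n 'tcp port 8080'

# Check the MTU slirp4netns configured inside the container's namespace
# (works if the image ships iproute2)
podman exec CONTAINER ip link show tap0
```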