allow to specify wait time for attach disk operation #956
Conversation
Welcome @sagor999!
Hi @sagor999. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @saad-ali
/ok-to-test
/lgtm Thanks for doing this; this is something I've been mulling over in general. PD 99th-percentile latencies seem to be in the ~10s range and tail latencies can be 1-2m, so this won't make a big difference in the tail, where we're usually concerned. But I have been wondering whether an attach that completes in a few seconds can just miss the first poll, effectively doubling the observed attach time. Anyway, it hasn't been a priority since the long tail is more important for us, but it will be useful to have some knobs, and this seems as good a way to do it as any.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: mattcary, sagor999. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
@sagor999 I hope you don't mind me digging this out, but were you actually able to achieve a sub-second attachment? Experimentally I wasn't able to go below roughly a 3-second threshold, even outside of GKE, i.e. with just a VM, a PD, and a CLI attach-disk command.
I don't think you're going to achieve reliably sub-second attach times, at least according to our fleet numbers; P50 is still measured in seconds. I think the best way to achieve fast "attach" times on GCE PD currently is to use a shared PD, attaching it in advance to the two VMs you want to swap between, and doing a logical attach in the data plane. This use case isn't supported by this CSI driver; I'd do it by creating a new CSI driver.
Dealing with node failure might be tricky. In theory one could attach the PD to a new node, but I don't know how updating a PV's topology works. Also note that, practically, this makes PD operate like an NFS mount, in that the "attach" is really done at mount time. So if fast attach is necessary, an NFS solution like Filestore might actually work better than PD.
@mattcary thanks for the input. Are the metrics you're referring to available to the broader public anywhere?
What type of PR is this?
/kind feature
What this PR does / why we need it:
Currently the poll time for the attach-disk operation (and for global/regional/zonal operations) is hard coded:
gcp-compute-persistent-disk-csi-driver/pkg/gce-cloud-provider/compute/gce-compute.go
Line 781 in 9681ab1
For example, for attach disk the driver waits a hard-coded 5 seconds before checking whether the disk was attached.
For our project we need to attach disks as fast as possible, and attachment usually completes extremely quickly (in under 1 second), so we need to be able to poll faster.
This PR makes the poll frequency configurable via command-line arguments.
The defaults remain unchanged (for example, AttachDisk is still polled every 5 seconds with a timeout of 2 minutes, as before), so existing installations will not notice any change.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: