Azure blob storage container is mounted as requested (RWO) for the very first time and read-only subsequently, despite no changes in the configuration. #1598
Comments
The mount process looks good. If you could ssh to the agent node, there is a standalone blobfuse mount process on the node that you can inspect directly.
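A minimal sketch of that node-level check, assuming a blobfuse-based mount (the commands are generic and the output depends on your node):

```shell
# On the agent node (via SSH, or a debug pod chrooted into /host):
# look for a standalone blobfuse mount process and the corresponding mount entry.
pgrep -af blobfuse          # any blobfuse/blobfuse2 process serving this volume?
mount | grep -i blobfuse    # the blobfuse mount and its options (rw vs ro)
```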
Hi, here is what I see:
But why are we talking about blobfuse if I use nfs? Just in case, here is the nfs grep:
Then you could run "mount | grep nfs" to find the NFS mount on the node, and check whether it is read-only on the node directly.
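For reference, a sketch of that check on the node (mount points and options shown will depend on the cluster; nothing here is taken from this issue's logs):

```shell
# List NFS mounts on the node; "rw" in the options means read-write,
# "ro" means the volume was mounted read-only.
mount | grep nfs

# findmnt reports the same information in a more structured form.
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS
```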
I resolved the issue. Your comment to inspect the mounts on the node helped me realize the root cause. There is no bug in the driver. Here is what happened: the same storage container is mounted in two different pods running in two different namespaces:
Both pods run on the same k8s node, but the second one runs all the time, so its ROX mount of the storage wins.
I was put off track by the fact that I have another storage container in exactly the same situation, and there everything works just fine. Now I understand why: that storage container is dedicated to a particularly big repo, so the CronJob responsible for refreshing it is configured to use a dedicated node pool. That guarantees the producer pod and the consumer pod run on different nodes, so their mounts never interfere with each other.
In my case the fix was easy: due to recent changes in our code the consumer pod no longer needs to mount the storage in question, so I simply stopped mounting it there. But in general, what would be the right solution? It is kind of weird that I cannot mount the same storage container multiple times (with different mount options) on the same node - why would that be a problem at all?
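One way to spot this kind of collision (all names below are hypothetical) is to list, for a given node, every pod together with the PVCs it mounts, and then look at the access modes the shared claims request:

```shell
# Hypothetical node name; adjust to your cluster.
NODE=aks-nodepool1-12345678-vmss000000

# Every pod scheduled on that node together with the PVCs it mounts.
kubectl get pods -A --field-selector spec.nodeName=$NODE -o json \
  | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name): \([.spec.volumes[]? | .persistentVolumeClaim.claimName // empty] | join(", "))"'

# Access modes requested by a claim that shows up under more than one pod.
kubectl get pvc -n my-namespace my-blob-pvc -o jsonpath='{.spec.accessModes}{"\n"}'
```

If both pods genuinely need the volume, keeping the producer and the consumer on different nodes (for example via a dedicated node pool, as the working repo already does) avoids the two mounts meeting on the same node.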
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
What happened:
I have 33 CronJobs running daily. Each job mounts its own storage container, refreshes a git repo there, and exits until the next scheduled invocation. I recently added another one (the 34th), and that one behaves weirdly: it is correctly mounted as read-write the very first time, just like the other 33, but all subsequent mounts are read-only. Needless to say, this prevents the job from refreshing the git repo on that storage container.
What you expected to happen:
The storage container is mounted read/write just like all the others that I mount in the same manner.
How to reproduce it:
It reproduces 100% on my cluster. No idea how it can be reproduced by others.
Anything else we need to know?:
???
Environment:
Kubernetes version (kubectl version): v1.30.3
OS (uname -a): Linux L-R910LPKW 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
I have extracted the logs for a typical storage container, as well as for the problematic one, from the respective instance of the csi-blob-node DaemonSet. The logs for the first mount of the problematic container, as well as for any other storage container, look the same, with the obvious exception of timestamps, latency, and names/UUIDs of pods/containers (I obfuscated the storage account name):
And here are the "problematic" logs:
Notice how the second attempt to mount the storage container lacks calls to /csi.v1.Node/NodeStageVolume and /csi.v1.Node/NodeUnstageVolume. What is going on?
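For anyone comparing logs the same way, here is a sketch of how such a grep could be done, assuming the node DaemonSet is csi-blob-node in kube-system and its CSI container is named blob (labels and names may differ in your installation):

```shell
# Hypothetical node name; use the node where the problematic pod was scheduled.
NODE=aks-nodepool1-12345678-vmss000000

# Find the csi-blob-node pod running on that node (label assumed from a default install).
POD=$(kubectl get pods -n kube-system -l app=csi-blob-node \
        --field-selector spec.nodeName=$NODE -o name | head -n 1)

# A read-write mount is expected to log NodeStageVolume before NodePublishVolume,
# and NodeUnstageVolume on teardown; the "problematic" remounts above skip both.
kubectl logs -n kube-system "$POD" -c blob \
  | grep -E 'NodeStageVolume|NodeUnstageVolume|NodePublishVolume'
```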