We have a deployment failure case where the node-driver-registrar sidecar on a node fails to connect to /csi/csi.sock.
This is the Clear Linux distro on a hardware-based host with NVDIMM memory.
The Clear Linux version is 28880, the kernel is 5.0.7, and the container runtime is CRI-O.
With node-driver-registrar v1.0.2, we see repeated 60-second connection timeouts, which lead to the pod exiting, restarting, and ending up in a CrashLoop.
We also tried switching to v1.1.0, where the meaningful change is "try connecting without timeout".
There the picture changes: the sidecar container does not exit but remains running and retrying, yet it never succeeds. BTW, this "don't exit" behavior somewhat hides the real problem (the proposal is to file a node-driver-registrar feature request for a more intelligent solution, like a readiness probe: #248 (comment)).
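To make that proposal concrete, a readiness probe for the sidecar could be as simple as an exec helper that dials the socket and reports the result via its exit code. This is a hypothetical sketch in Go, not existing node-driver-registrar code; the socket path /csi/csi.sock is taken from the strace output below:

// readiness_check.go: hypothetical exec-probe helper. Exit code 0 means the
// CSI driver socket exists and accepts connections; 1 means not ready.
package main

import (
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/csi/csi.sock", 2*time.Second)
	if err != nil {
		// Covers both a missing socket file (ENOENT) and a socket
		// that exists but is not accepting connections.
		os.Exit(1)
	}
	conn.Close()
}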
We found an issue with similar symptoms and suspected SELinux involvement, but SELinux is not in use here.
The newest evidence, from running strace on node-driver-registrar, shows this:
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3<UNIX:[67224]>
setsockopt(3<UNIX:[67224]>, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(3<UNIX:[67224]>, {sa_family=AF_UNIX, sun_path="/csi/csi.sock"}, 16) = -1 ENOENT (No such file or directory)
This means it is different from kubernetes-csi/node-driver-registrar#36.
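For what it's worth, the same failure mode is easy to reproduce with a few lines of Go (a minimal sketch, assuming the /csi directory is reachable but contains no csi.sock): connect(2) on a missing Unix socket path fails with ENOENT, whereas SELinux denials would typically surface as permission errors (EACCES) rather than ENOENT.

// repro_enoent.go: minimal sketch reproducing the strace result.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

func main() {
	_, err := net.Dial("unix", "/csi/csi.sock")
	fmt.Println(err)
	// true when the socket file itself is absent, matching the strace above
	fmt.Println(errors.Is(err, syscall.ENOENT))
}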
We have a lot of evidence from Clear-based deployments (using a 'make start' cluster of VMs), but most of the time such a cluster is created Docker-based.
Does a CRI-O-based runtime cause some difference in how the local socket is passed into the container?
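One runtime-agnostic way to narrow this down would be to run a small check inside the sidecar container: if /csi exists but is empty, the volume is passed in correctly and the socket simply was never created; if /csi itself is missing, the mount did not make it into the container. A hypothetical diagnostic sketch:

// check_mount.go: hypothetical diagnostic to run inside the sidecar container.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/csi")
	if err != nil {
		// /csi missing entirely points at the volume mount / runtime.
		fmt.Println("mount problem:", err)
		return
	}
	// /csi present but without csi.sock points at the driver never creating it.
	fmt.Println("/csi is mounted; contents:")
	for _, e := range entries {
		fmt.Println(" -", e.Name())
	}
}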
It was a different issue: a misconfiguration led to the node driver failing to connect to the kubelet's TCP socket, so it never even created /csi/csi.sock.
node-driver-registrar was correct; there was no socket.
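That outcome is consistent with the usual CSI driver startup order: the socket file only comes into existence once the driver gets far enough to listen on it, so any earlier failure leaves nothing for the registrar to connect to. A simplified sketch of that pattern (not this driver's actual code):

// driver_startup.go: simplified sketch of a typical CSI driver startup order.
package main

import (
	"log"
	"net"
)

func main() {
	// Step 1: driver-specific initialization. A misconfiguration here (like
	// the failed kubelet TCP connection in this issue) aborts startup early.
	if err := initializeDriver(); err != nil {
		log.Fatalf("startup failed before socket creation: %v", err)
	}

	// Step 2: only now does /csi/csi.sock appear on disk. If step 1 fails,
	// the registrar's connect() gets ENOENT, exactly as observed.
	lis, err := net.Listen("unix", "/csi/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer lis.Close()
	// A real driver would serve gRPC on lis; plain accepts keep this
	// sketch self-contained.
	for {
		conn, err := lis.Accept()
		if err != nil {
			log.Fatal(err)
		}
		conn.Close()
	}
}

// initializeDriver stands in for the real driver's setup (hypothetical).
func initializeDriver() error { return nil }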