Support SubConn idleness when connections are lost #4298
This includes when a GOAWAY is received. The client should stay IDLE until a new RPC is started. C and Java both have this behavior. Go does not, and it can cause problems where idle clients are actively reconnecting to backends they don't need to use. Note that if a round robin LB policy is active, it will still maintain persistent connections to all backends, which is by design. Pick first, however, will stay disconnected until the next RPC.
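As an illustration of the requested behavior, here is a minimal client-side sketch (not from the issue itself; the address `localhost:50051` and the use of the standard health-checking service as the RPC are assumptions). It issues one RPC and then watches the channel's connectivity states; with this feature, a lost connection should leave the channel in IDLE until the next RPC is started.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// pick_first is the default LB policy, so this channel maintains at
	// most one connection.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Issue an RPC (the health service is just a convenient method to
	// call). Per this issue, an RPC attempt is also what should trigger a
	// reconnect after the connection has been lost.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if _, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{}); err != nil {
		log.Printf("health check: %v", err)
	}

	// Watch connectivity states. With the requested behavior, once the
	// server sends GOAWAY or the connection is otherwise lost, the channel
	// should settle in IDLE and stay there until the next RPC is started.
	for {
		state := conn.GetState()
		log.Printf("channel state: %v", state)
		if state == connectivity.Idle {
			log.Print("idle: no reconnect until the next RPC")
		}
		if !conn.WaitForStateChange(context.Background(), state) {
			return // conn was closed
		}
	}
}
```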
I jumped to this issue from #4282. Please let me know if I understand it incorrectly. Thanks!
No, the old connection that received GOAWAY is closed. The current behavior is that the new connection will be created immediately, rather than waiting for the next RPC.
Any updates on this feature, @dfawley? In our setup I have a gut feeling this is what's causing a memory leak. The client is the Kubernetes api-server, and the gRPC server runs as a proxy (apiserver-network-proxy) on the cluster as a pod, acting as a bridge between the control plane and worker nodes. We are seeing memory leaks but have not been able to identify the root cause. For some reason setting MaxConnectionIdle doesn't work; however, setting … Could it possibly be the case that we are hitting this issue?
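For context, a minimal server-side sketch (not taken from the reporter's setup; the port, the five-minute duration, and the omitted service registration are placeholders) of where MaxConnectionIdle is configured via keepalive.ServerParameters. When a connection has had no active RPCs for that long, the server sends GOAWAY and closes it.

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	lis, err := net.Listen("tcp", ":8091")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	// MaxConnectionIdle: after this long with no active RPCs, the server
	// sends GOAWAY and closes the connection.
	srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		MaxConnectionIdle: 5 * time.Minute,
	}))

	// Service registration omitted; register the proxy's services here.

	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```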
This was implemented in #4613. gRPC will now not reconnect until an RPC is attempted if a connection is lost. It's unlikely that this kind of thing would fix memory leak issues, although it will make gRPC consume fewer resources when a connection is lost. Note that …
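For completeness, a hypothetical helper (the package and function names are placeholders, not part of grpc-go) showing the other way to bring a channel out of IDLE under this behavior: call Connect() explicitly instead of waiting for the next RPC to trigger the reconnect.

```go
package grpcutil

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
)

// warmUp asks an idle channel to start connecting and blocks until it
// reports READY, the context expires, or the channel is closed.
func warmUp(ctx context.Context, conn *grpc.ClientConn) bool {
	conn.Connect() // only has an effect when the channel is IDLE
	for {
		s := conn.GetState()
		if s == connectivity.Ready {
			return true
		}
		if !conn.WaitForStateChange(ctx, s) {
			return false // ctx cancelled or conn closed
		}
	}
}
```

This is only worthwhile for latency-sensitive callers that want the connection warm before their next RPC; otherwise, simply attempting an RPC reconnects the channel on demand.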
Closing this issue since the feature it describes is implemented by #4613.