How to manage a rogue client that is not closing connections #4105
Comments
Servers don't see channels; they only see the TCP connections created by the clients. I think you can count the number of connections from each client (by IP). To count connections, you can wrap the listener and override its `Accept` method.
You are essentially DoS'ed by your clients. So this would be the right thing to do.
This issue is labeled as requiring an update from the reporter, and no update has been received after 6 days. If no update is provided in the next 7 days, this issue will be automatically closed.
I face a similar problem. I set the keepalive policy as above, expecting the server to close the connection of a client that does nothing but never disconnects; however, after 2 hours this client is still alive and can send requests normally.
Let's track this part of the feature under #4298.
We have a public gRPC API. We have a client that is consuming our API based on the REST paradigm of creating a connection (channel) for every request. We suspect that they are not closing this channel once the request has been made.
On the server side, everything functions OK for a while, then it seems that some resource is exhausted. Requests back up on the server and are not processed, which results in our proxy timing out and returning an unavailable response. Restarting the server fixes the issue.
Unfortunately, it seems that there is no way to monitor what is happening on the server side and prune these connections. We have the following keepalive settings, but they don't appear to have an impact:
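(The reporter's actual settings were not captured in this copy of the thread. For reference, a typical grpc-go server-side keepalive setup looks like the sketch below; the durations are placeholders. `MaxConnectionIdle` is the knob that is supposed to close connections with no active RPCs.)

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func newServer() *grpc.Server {
	return grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			// Send GOAWAY and close a connection with no active RPCs
			// for this long. All durations here are illustrative.
			MaxConnectionIdle: 5 * time.Minute,
			// Optionally recycle long-lived connections as well.
			MaxConnectionAge:      30 * time.Minute,
			MaxConnectionAgeGrace: 5 * time.Minute,
			// Ping an idle client, and drop it if it does not respond.
			Time:    2 * time.Minute,
			Timeout: 20 * time.Second,
		}),
	)
}
```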
Is there any way that we can monitor channel creation and destruction on the server side, if only to prove to the client that their consumption is causing the problems? Verbose logging has not been helpful, as it seems to only log the client activity on the server (i.e. the server consuming pub/sub and logging as a client). I have also looked at channelz, but we have mutual TLS auth and I have been unsuccessful in getting it to work on our production pods.
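One way to sidestep the mutual-TLS problem is to register the channelz service on a second, plaintext gRPC server bound to loopback only, separate from the production mTLS server. A sketch (the port is arbitrary, and `debugSrv` is an illustrative name):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	channelzservice "google.golang.org/grpc/channelz/service"
)

func main() {
	// Plaintext listener on loopback only: reachable from inside the pod
	// (e.g. via kubectl port-forward) but not from outside.
	lis, err := net.Listen("tcp", "127.0.0.1:50052")
	if err != nil {
		log.Fatal(err)
	}
	debugSrv := grpc.NewServer()
	channelzservice.RegisterChannelzServiceToServer(debugSrv)
	go debugSrv.Serve(lis)

	// ... start the real mTLS production server as usual ...
	select {}
}
```

The channelz data from this endpoint includes per-connection (socket) state, which can show how many connections each client holds open.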
We have instructed our client to use a single channel, and if that is not possible, to close the channels that they are creating, but they are a large corporation and move very slowly.