Currently we are using the L4 TCP proxy, which offers no request-level load balancing. Therefore, we transitioned to the gRPC protocol and also tested with HTTP/2.
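For context, this is how we mark a service as gRPC so Consul configures Envoy for L7 proxying instead of opaque TCP passthrough. A minimal sketch, assuming the consul-k8s ServiceDefaults CRD and our service name; the plain HCL config entry with `Kind = "service-defaults"` and `Protocol = "grpc"` is equivalent:

```yaml
# Sketch: set the service protocol to grpc so Envoy balances individual
# requests/streams rather than forwarding one long-lived TCP connection.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: sb-iaas-vertix-middleware-user   # assumption: the upstream service seen in the logs below
spec:
  protocol: grpc
```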
However, while testing and debugging with access logs, we could not get the services to connect because Envoy rejects the requests with an http2.too_many_headers error.
Here's a piece of our access log:
2024-03-14T03:49:29.332Z+00:00 [debug] envoy.router(30) [C263][S16574892551486916886] cluster 'local_app' match
2024-03-14T03:49:29.332Z+00:00 [debug] envoy.router(30) [C263][S16574892551486916886] router decoding headers:
':method', 'POST'
':scheme', 'http'
':path', '/api.iass.portal.user.PortalUserService/xxx'
':authority', 'sb-iaas-vertix-middleware-user:5000'
'content-type', 'application/grpc'
'user-agent', 'grpc-go/1.61.0'
'te', 'trailers'
'grpc-accept-encoding', 'gzip'
'traceparent', '00-e2fef2a58e74b525bd5fa9d67c390040-598e8613982231df-01'
'x-forwarded-for', '10.3.15.41,10.3.15.41, 192.168.120.0'
'authorization', 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTlmYWRmNDA5ZWRkNmQzOTZlMDI5ZmY0NTlmOTQ4NDdjMjQzNDIyM2ZhZjA0ZTRhZjNhNzRjZTlmsjgwLCJ1aWQiOiJpYWFzLnBvcnRhbC51c3IuYjk3OTMyZDFlMzQzNDBmZmE0M2UzYmNmYTQ4NzIwZWYifQ.yL1c_vGHKgPHYQu6W57cHm-kkfLLIP8SmJrg3zcnCVo'
'x-forwarded-host', 'sb-console.infiniband.vn'
'http-method-fwd', 'GET'
'stepctx', '{"method":"/api.iass.portal.user.PortalUserService/GetUser","request":"b020487da29a4031981e4bfd5bec1015"}'
'x-forwarded-proto', 'http'
'x-request-id', 'cf37f903-840e-4387-aca8-b36ede78aeed'
'x-forwarded-client-cert', 'By=spiffe://f1bd3171-86c9-4504-c5e3-00c1ffe1bf67.consul/ns/default/dc/dc-vertix-consul-mesh/svc/sb-iaas-vertix-middleware-user;Hash=cf3c025d96a049964e923fa53a1af666b715406b34fe342101cf0443fa427be6;Cert="-----BEGIN%20CERTIFICATE-----%0AMIICQzCCAeqgAwIBAgICEBcwCgYIKoZIzj0EAwIwMTEvMC0GA1UEAxMmcHJpLTFz%0AOGk5dmIxLmNvbnN1bC5jYS5%0AWhcNMjQwMzE2MDY0MTIxWjAAMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE3aSI%0Aomm%2FlMSjiXADU2nJil8Tr6Mb6RVb8hwS0tIBUPEwXXXW8IESDR%2BeV7sTz4uZrI73%0AAKfjihlWM%2BmtNMTXGaOCASEwggEdMA4GA1UdDwEB%2FwQEAwIDuDAdBgNVHSUEFjAU%0ABggrBgEFBQcDAgYIKwYBBQUHAwEwDAYDVR0TAQH%2FBAIwADApBgNVHQ4EIgQgHCUg%0AFnAb2MuWAqrhXUNAmeu2zJyW3qGA6TElsyY%2FezQwKwYDVR0jBCQwIoAgE4VoZTfI%0AeD381gQGOvbCIcLq9vM3YtsSshBGTBnravgwgYUGA1UdEQEB%2FwR7MHmGd3NwaWZm%0AZTovL2YxYmQzMTcxLTg2YzktNDUwNC1jNWUzLTAwYzFmZmUxYmY2Ny5jb25zdWwv%0AbnMvZGVmYXVsdC9kYy9kYy12ZXJ0aXgtY29uc3VsLW1lc2gvc3ZjL3NiLWlhYXMt%0AdmVydGl4LWFwaS1nYXRld2F5MAoGCCqGSM49BAMCA0cAMEQCIG9aKUyEIhAlmqVF%0Axk9tq37FvF%2BzaORQ44TGRJen8Y4LAiAm8AdS52EX%2B05YL8lh18xJcETQoBMwBS5%2B%0AOihBOtN7rw%3D%3D%0A-----END%20CERTIFICATE-----%0A";Chain="-----BEGIN%20CERTIFICATE-----%0AMIICQzCCAeqgAwIBAgICEBcwCgYIKoZIzj0EAwIwMTEvMC0GA1UEAxMmcHJpLTFz%0AOGk5dmIxLmNvbnN1bC5jYS5mMWJkMzE3MS5jb25zdWwwHhcNMjQwMzEzMDY0MTIx%0AWhcNMjQwMzE2MDY0MTIxWjAAMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE3aSI%0Aomm%2FlMSjiXADU2nJil8Tr6Mb6RVb8hwS0tIBUPEwXXXW8IESDR%2BeV7sTz4uZrI73%0AAKfjihlWM%2BmtNMTXGaOCASEwggEdMA4GA1UdDwEB%2FwQEAwIDuDAdBgNVHSUEFjAU%0ABggrBgEFBQcDAgYIKwYBBQUHAwEwDAYDVR0TAQH%2FBAIwADApBgNVHQ4EIgQgHCUg%0AFnAb2MuWAqrhXUNAmeu2zJyW3qGA6TElsyY%2FezQwKwYDVR0jBCQwIoAgE4VoZTfI%0AeD381gQGOvbCIcLq9vM3YtsSshBGTBnravgwgYUGA1UdEQEB%2FwR7MHmGd3NwaWZm%0AZTovL2YxYmQzMTcxLTg2YzktNDUwNC1jNWUzLTAwYzFmZmUxYmY2Ny5jb25zdWwv%0AbnMvZGVmYXVsdC9kYy9kYy12ZXJ0aXgtY29uc3VsLW1lc2gvc3ZjL3NiLWlhYXMt%0AdmVydGl4LWFwaS1nYXRld2F5MAoGCCqGSM49BAMCA0cAMEQCIG9aKUyEIhAlmqVF%0Axk9tq37FvF%2BzaORQ44TGRJen8Y4LAiAm8AdS52EX%2B05YL8lh18xJcETQoBMwBS5%2B%0AOihBOtN7rw%3D%3D%0A-----END%20CERTIFICATE-----%0A";Subject="";URI=spiffe://f1bd3171-86c9-4504-c5e3-00c1ffe1bf67.consul/ns/default/dc/dc-vertix-consul-mesh/svc/sb-iaas-vertix-api-gateway'
'x-envoy-expected-rq-timeout-ms', '15000'
2024-03-14T03:49:29.332Z+00:00 [debug] envoy.pool(30) queueing stream due to no available connections (ready=0 busy=0 connecting=0)
2024-03-14T03:49:29.332Z+00:00 [debug] envoy.pool(30) trying to create new connection
2024-03-14T03:49:29.332Z+00:00 [debug] envoy.pool(30) creating a new connection (connecting=0)
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.http2(30) [C264] updating connection-level initial window size to 268435456
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.connection(30) [C264] current connecting state: true
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.client(30) [C264] connecting
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.connection(30) [C264] connecting to 127.0.0.1:5000
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.connection(30) [C264] connection in progress
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.http(30) [C263][S16574892551486916886] request end stream
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.connection(30) [C264] connected
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.client(30) [C264] connected
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.pool(30) [C264] attaching to next stream
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.pool(30) [C264] creating stream
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.router(30) [C263][S16574892551486916886] pool ready
2024-03-14T03:49:29.337Z+00:00 [debug] envoy.client(30) [C264] encode complete
2024-03-14T03:49:29.366Z+00:00 [debug] envoy.filter(29) original_dst: set destination to 172.26.18.18:6380
2024-03-14T03:49:29.366Z+00:00 [debug] envoy.filter(29) [C265] new tcp proxy session
2024-03-14T03:49:29.366Z+00:00 [debug] envoy.filter(29) [C265] Creating connection to cluster original-destination
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.misc(29) Allocating TCP conn pool
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.pool(29) trying to create new connection
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.pool(29) creating a new connection (connecting=0)
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.connection(29) [C266] connecting to 172.26.18.18:6380
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.connection(29) [C266] connection in progress
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.conn_handler(29) [C265] new connection from 192.168.123.141:43482
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.connection(29) [C266] connected
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.pool(29) [C266] attaching to next stream
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.pool(29) [C266] creating stream
2024-03-14T03:49:29.367Z+00:00 [debug] envoy.filter(29) [C265] TCP:onUpstreamEvent(), requestedServerName:
2024-03-14T03:49:29.381Z+00:00 [debug] envoy.http(30) [C235] new stream
2024-03-14T03:49:29.382Z+00:00 [debug] envoy.http2(30) [C235] sent reset code=2
2024-03-14T03:49:29.382Z+00:00 [debug] envoy.http2(30) [C235] stream 1 closed: 2
2024-03-14T03:49:29.382Z+00:00 [debug] envoy.http(30) [C235][S16660825237406255215] stream reset: reset reason: remote reset, response details: http2.too_many_headers
2024-03-14T03:49:29.383Z+00:00 [debug] envoy.http2(30) [C235] Recouping 0 bytes of flow control window for stream 1.
{"downstream_remote_address":"192.168.123.141:52538","path":null,"requested_server_name":null,"upstream_host":null,"route_name":null,"user_agent":null,"start_time":"2024-03-14T03:49:29.381Z","upstream_cluster":null,"downstream_local_address":"10.98.103.16:5000","response_flags":"-","upstream_local_address":null,"upstream_service_time":null,"bytes_received":0,"request_id":null,"x_forwarded_for":null,"authority":null,"protocol":"HTTP/2","connection_termination_details":null,"method":null,"upstream_transport_failure_reason":null,"bytes_sent":0,"duration":0,"response_code_details":"http2.too_many_headers","response_code":0}
2024-03-14T03:49:29.383Z+00:00 [debug] envoy.client(30) [C264] response complete
2024-03-14T03:49:29.383Z+00:00 [debug] envoy.pool(30) [C264] destroying stream: 0 remaining
2024-03-14T03:49:29.383Z+00:00 [debug] envoy.router(30) [C263][S16574892551486916886] upstream headers complete: end_stream=true
2024-03-14T03:49:29.386Z+00:00 [debug] envoy.http(30) [C263][S16574892551486916886] encoding headers via codec (end_stream=true):
':status', '200'
'content-type', 'application/grpc'
'grpc-status', '13'
'grpc-message', 'stream terminated by RST_STREAM with error code: INTERNAL_ERROR'
'x-envoy-upstream-service-time', '46'
'date', 'Thu, 14 Mar 2024 03:49:28 GMT'
'server', 'envoy'
2024-03-14T03:49:29.386Z+00:00 [debug] envoy.http(30) [C263][S16574892551486916886] Codec completed encoding stream.
2024-03-14T03:49:29.386Z+00:00 [debug] envoy.http2(30) [C263] stream 1 closed: 0
2024-03-14T03:49:29.386Z+00:00 [debug] envoy.http2(30) [C263] Recouping 0 bytes of flow control window for stream 1.
2024-03-14T03:49:29.386Z+00:00 [debug] envoy.http2(30) [C264] stream 1 closed: 0
2024-03-14T03:49:29.386Z+00:00 [debug] envoy.http2(30) [C264] Recouping 0 bytes of flow control window for stream 1.
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.filter(29) original_dst: set destination to 172.26.17.17:4223
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.filter(29) [C267] new tcp proxy session
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.filter(29) [C267] Creating connection to cluster original-destination
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.misc(29) Allocating TCP conn pool
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.pool(29) trying to create new connection
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.pool(29) creating a new connection (connecting=0)
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.connection(29) [C268] connecting to 172.26.17.17:4223
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.connection(29) [C268] connection in progress
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.conn_handler(29) [C267] new connection from 192.168.123.141:57992
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.connection(29) [C268] connected
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.pool(29) [C268] attaching to next stream
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.pool(29) [C268] creating stream
2024-03-14T03:49:29.427Z+00:00 [debug] envoy.filter(29) [C267] TCP:onUpstreamEvent(), requestedServerName:
2024-03-14T03:49:30.350Z+00:00 [debug] envoy.main(15) flushing stats
2024-03-14T03:49:35.346Z+00:00 [debug] envoy.main(15) flushing stats
2024-03-14T03:49:36.072Z+00:00 [debug] envoy.conn_handler(29) [C269] new connection from 172.25.135.171:52378
2024-03-14T03:49:36.073Z+00:00 [debug] envoy.connection(29) [C269] closing socket: 0
2024-03-14T03:49:36.073Z+00:00 [debug] envoy.conn_handler(29) [C269] adding to cleanup list
{"protocol":"HTTP/2","upstream_local_address":"127.0.0.1:40316","response_flags":"-","request_id":"cf37f903-840e-4387-aca8-b36ede78aeed","bytes_received":5,"duration":56,"authority":"sb-iaas-vertix-middleware-user:5000","upstream_transport_failure_reason":null,"response_code_details":"via_upstream","connection_termination_details":null,"path":"/api.iass.portal.user.PortalUserService/GetUser","start_time":"2024-03-14T03:49:29.330Z","upstream_service_time":"46","method":"POST","requested_server_name":null,"downstream_local_address":"192.168.123.141:20000","upstream_host":"127.0.0.1:5000","response_code":200,"route_name":null,"x_forwarded_for":"10.3.15.41,10.3.15.41, 192.168.120.0","user_agent":"grpc-go/1.61.0","upstream_cluster":"local_app","bytes_sent":0,"downstream_remote_address":"192.168.120.0:0"}
Is there any way we can increase the header size/count limits? I believe the default maximum is 100 headers, but our requests only carry 17-20 headers. Or is this a bug?
Hi @ttn-nguyen42, did you find any solution for this?
Hi @hanstf, the problem was that our system sends very large headers, much larger than Envoy's default limit of 60 KB. With a bit of tweaking through Consul's ServiceDefaults and ProxyDefaults we managed to extend the limit to 96 KB (the maximum Envoy allows), but it seems impossible to go past that limit.
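Roughly, the Envoy-level fields involved sit on the HTTP connection manager; this is a sketch of the target values, not our exact Consul-generated config:

```yaml
# HttpConnectionManager (v3) header limits. max_request_headers_kb is the total
# header size (default 60 KiB, capped at 96 KiB on the Envoy versions we ran);
# max_headers_count is the number of headers (default 100). In our case the
# http2.too_many_headers reset was triggered by header size, not header count.
max_request_headers_kb: 96
common_http_protocol_options:
  max_headers_count: 100
```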
Thanks, that's interesting. I thought http2.too_many_headers was only thrown when the max_headers_count limit was hit, not max_request_headers_kb.