Upgrade storage integration test: use TraceWriter
#6437
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
## main #6437 +/- ##
==========================================
- Coverage 96.30% 96.29% -0.02%
==========================================
Files 371 371
Lines 21160 21173 +13
==========================================
+ Hits 20379 20389 +10
- Misses 598 600 +2
- Partials 183 184 +1
lgtm, with some small nits. Also please check why CI is not green.
force-pushed from 9bd5c7f to 6c38cd2
For otlp_json encoding, the Kafka test fails to get large spans. I'm trying to figure out why.
I found this error. I'd call this a bug in the OTEL Kafka exporter: it is not respecting the max message size for Kafka. In other words, if for whatever reason the collector received a very large payload and accepted it, the exporter should not fail to export it just because it's large; it should split the payload into chunks of acceptable size.

For the purpose of this PR we can probably just increase this parameter to 3MB (but I am not sure if Kafka's internal configuration also needs to be increased). Alternatively, we can change the e2e test to not send the whole trace all at once, but break it into, say, 1000-span chunks. The ideal fix would be to correct the OTEL kafkaexporter to respect the message size.
For the alternative solution, it is at this point:

```go
func (w *traceWriter) WriteTraces(ctx context.Context, td ptrace.Traces) error {
	// create chunks of trace if span count > 1000
	return w.exporter.ConsumeTraces(ctx, td)
}
```

I have a question, please: why doesn't this call return an error when the spans fail to make it into Kafka?
Good question. It's because there is no error when it does export: it sends the payload to the OTLP receiver in the collector, which accepts it and passes it down the pipeline. In the pipeline we have a batch processor that always responds without an error, because it groups the spans and then sends them in the background, at which point the error happens but there's no place to report it except the logs. It's a flaw in the OTEL collector batch processor: it could have been implemented to return the error to all clients whose payload failed to be exported in a batch.
And yes, in the `WriteTraces` function we can split a trace into several chunks.
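A minimal sketch of what that chunked write could look like, assuming the `traceWriter`/`exporter` fields from the snippet above and an arbitrary 1000-span limit (`chunkTraces` is an illustrative helper, not an existing Jaeger function):

```go
package integration

import (
	"context"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/ptrace"
)

// traceWriter mirrors the struct from the snippet above; exporter is assumed
// to satisfy the collector's consumer.Traces interface.
type traceWriter struct {
	exporter consumer.Traces
}

// WriteTraces forwards the trace in pieces so that no single Kafka message
// exceeds whatever size limit the exporter and broker enforce.
func (w *traceWriter) WriteTraces(ctx context.Context, td ptrace.Traces) error {
	for _, chunk := range chunkTraces(td, 1000) {
		if err := w.exporter.ConsumeTraces(ctx, chunk); err != nil {
			return err
		}
	}
	return nil
}

// chunkTraces copies spans from td into new ptrace.Traces objects holding at
// most maxSpans spans each. For brevity it appends a fresh resource/scope
// envelope per span; a real implementation would reuse envelopes per chunk.
func chunkTraces(td ptrace.Traces, maxSpans int) []ptrace.Traces {
	var chunks []ptrace.Traces
	current := ptrace.NewTraces()
	count := 0
	for i := 0; i < td.ResourceSpans().Len(); i++ {
		rs := td.ResourceSpans().At(i)
		for j := 0; j < rs.ScopeSpans().Len(); j++ {
			ss := rs.ScopeSpans().At(j)
			for k := 0; k < ss.Spans().Len(); k++ {
				if count == maxSpans {
					chunks = append(chunks, current)
					current = ptrace.NewTraces()
					count = 0
				}
				dstRS := current.ResourceSpans().AppendEmpty()
				rs.Resource().CopyTo(dstRS.Resource())
				dstSS := dstRS.ScopeSpans().AppendEmpty()
				ss.Scope().CopyTo(dstSS.Scope())
				ss.Spans().At(k).CopyTo(dstSS.Spans().AppendEmpty())
				count++
			}
		}
	}
	if count > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}
```

Splitting here would keep the e2e test exercising the normal export path while avoiding a single oversized Kafka message.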
I filed #6439 to get a proper fix upstream, but meanwhile we can work around it in this PR.
force-pushed from 03b059d to 9ce3f14
- use standard for traces
- upgrade test for `V1TraceToOtelTrace`
- upgrade test
- improve function structure
force-pushed from 9ce3f14 to 861eea1
```go
// Add span1 and span2
scope1 := resources.At(0).ScopeSpans().At(0)
for i := 1; i <= 2; i++ {
	span := scope1.Spans().AppendEmpty()
	span.SetSpanID(pcommon.SpanID([8]byte{0, 0, 0, 0, 0, 0, 0, byte(i)}))
	span.SetName(fmt.Sprintf("span%d", i))
}
```
why not do this in the same loop at L54?
I wanted it to be explicit that scope1 and scope3 have two spans while scope2 has only one.
Thanks
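For reference, the single-span scope presumably looks something like the sketch below; the scope index, span ID, and name are assumptions for illustration, not copied from the test.

```go
// scope2 intentionally carries only one span, in contrast to scope1 and scope3.
scope2 := resources.At(0).ScopeSpans().At(1)
span3 := scope2.Spans().AppendEmpty()
span3.SetSpanID(pcommon.SpanID([8]byte{0, 0, 0, 0, 0, 0, 0, 3}))
span3.SetName("span3")
```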
Which problem is this PR solving?
Description of the changes
- Upgrade `StorageIntegration` to align with the v2 storage API while supporting the v1 API: replace `SpanWriter` with `TraceWriter` (see the sketch below).
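As a rough illustration of the v1/v2 bridge (names such as `spanWriterShim` and `translateToModel` are hypothetical, not the actual Jaeger helpers, and the import paths assume the v1 repo layout), a v1 `spanstore.Writer` can be wrapped so the integration test only ever talks to a `WriteTraces`-style API:

```go
package sketch

import (
	"context"

	"go.opentelemetry.io/collector/pdata/ptrace"

	"github.com/jaegertracing/jaeger/model"
	"github.com/jaegertracing/jaeger/storage/spanstore"
)

// spanWriterShim is an illustrative bridge: it exposes a v2-style
// WriteTraces(ctx, ptrace.Traces) call on top of a v1 spanstore.Writer.
type spanWriterShim struct {
	writer spanstore.Writer
}

func (s *spanWriterShim) WriteTraces(ctx context.Context, td ptrace.Traces) error {
	// Convert the OTLP payload to v1 model spans, then write them one by one
	// through the legacy writer.
	for _, span := range translateToModel(td) {
		if err := s.writer.WriteSpan(ctx, span); err != nil {
			return err
		}
	}
	return nil
}

// translateToModel is a placeholder for the OTLP -> Jaeger model translation;
// the real conversion lives in Jaeger's translator packages.
func translateToModel(td ptrace.Traces) []*model.Span {
	return nil
}
```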
How was this change tested?
Checklist
- for `jaeger`: run `make lint test`
- for `jaeger-ui`: run `npm run lint` and `npm run test`