Auto removal no block #914
Conversation
Codecov Report
Additional details and impacted files
@@ Coverage Diff @@
## main #914 +/- ##
==========================================
+ Coverage 57.58% 62.66% +5.07%
==========================================
Files 370 338 -32
Lines 17548 17044 -504
==========================================
+ Hits 10105 10680 +575
+ Misses 6848 5807 -1041
+ Partials 595 557 -38
☔ View full report in Codecov by Sentry.
Overall my only concern is around e2e testing. Can we do a stress test on this to validate that under some workload, the current agent falls behind and skips logs, but the agent with this change doesn't? How do we prove that it's deterministic? I think that's the crux of my concern
Yes, good call. I can work on adding an E2E test which creates files every few seconds and spams them with log lines such that the agent will fall behind trying to read and upload (most likely throttled)... Beware this will be "expensive" since we are actually uploading those log lines.
Description of the issue
Previously there was this issue: #381
Which was fixed by #452.
But now, instead of an edge case where the agent will DROP log lines, the issue is that the agent will BLOCK waiting to upload ALL log lines, and could miss discovering and reporting on a new file.
Consider the following events:
1. `LogFile.FindLogSrc()` is blocking, waiting for a tail to exit.
2. Before `LogFile.FindLogSrc()` completes, another 2 files matching the monitored pattern are created (File 3 and File 4).

If this series of events repeats, the host could be left with a bunch of unread, unreported, and undeleted files.
Description of changes
`LogFile.FindLogSrc()` will no longer block waiting for the tail to exit. This means there could be more than one tail running and reading LogEvents for a single log source (this is already possible today).
License
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Tests
BEFORE the fix:
AFTER the fix:
Requirements
Before committing the code, please do the following steps:
1. Run `make fmt` and `make fmt-sh`
2. Run `make lint`