Add latency stats around cluster config file operations #1534
base: unstable
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##            unstable     #1534      +/-   ##
============================================
+ Coverage      70.92%    70.94%    +0.01%
============================================
  Files            120       120
  Lines          65004     65028       +24
============================================
+ Hits           46104     46131       +27
+ Misses         18900     18897        -3
When the cluster changes, we need to persist the cluster configuration, and these file IO operations may cause latency.
Signed-off-by: Binbin <binloveplay1314@qq.com>
Force-pushed from 33047ce to 523f1e3.
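For context, the change wraps each of the config-file operations in Valkey's latency sampling macros from latency.h (latencyStartMonitor, latencyEndMonitor, latencyAddSampleIfNeeded). Below is a minimal sketch of that pattern; the helper name and the "cluster-config-write" event are illustrative assumptions, while "cluster-config-fsync" is the event that appears in the reviewed hunk that follows.

#include <unistd.h>
#include "server.h" /* mstime_t, latency macros, valkey_fsync */

/* Illustrative helper (not from the patch): write the temporary cluster
 * config file and fsync it, recording a latency sample for each step if it
 * crosses the configured latency-monitor-threshold. */
static int writeAndSyncClusterConfig(int fd, const char *buf, size_t len) {
    mstime_t latency;

    /* Sample the write of the temporary config file. */
    latencyStartMonitor(latency);
    if (write(fd, buf, len) != (ssize_t)len) return -1;
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("cluster-config-write", latency); /* assumed event name */

    /* Sample the fsync, the operation most often observed to stall. */
    latencyStartMonitor(latency);
    if (valkey_fsync(fd) == -1) return -1;
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("cluster-config-fsync", latency); /* event name from the diff below */
    return 0;
}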
server.cluster->todo_before_sleep &= ~CLUSTER_TODO_FSYNC_CONFIG;
if (valkey_fsync(fd) == -1) {
    serverLog(LL_WARNING, "Could not sync tmp cluster config file: %s", strerror(errno));
    goto cleanup;
}
latencyEndMonitor(latency);
latencyAddSampleIfNeeded("cluster-config-fsync", latency);
We've only seen significant time taken in the fsync (both here and in the dir). Have you observed the other operations taking significant amounts of time?
The other one I see most often is open (10ms - 5s), which seems to be waiting to take the directory lock or something like that, and write sometimes takes a few ms. I haven't seen the others in the production environment, but I think I do see them take a few ms in the test environment (I don't see the unlink one). Since we already have a lot of latency sampling around AOF (fsync/rename/write/fstat), I added it to all of the operations here.
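For illustration, here is a minimal sketch of sampling the open() of the temporary config file in the same way; the helper and the "cluster-config-open" event name are assumptions for this sketch, not necessarily what the patch uses.

#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include "server.h" /* mstime_t, latency macros, serverLog */

/* Illustrative helper (not from the patch): open the temporary cluster
 * config file and record how long the open() call itself took, e.g. the
 * directory-lock wait mentioned above. */
static int openTmpClusterConfig(const char *tmp_filename) {
    mstime_t latency;
    int fd;

    latencyStartMonitor(latency);
    fd = open(tmp_filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("cluster-config-open", latency); /* assumed event name */

    if (fd == -1)
        serverLog(LL_WARNING, "Could not open tmp cluster config file: %s", strerror(errno));
    return fd;
}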
Sounds good. I suppose open also makes sense. I'm thinking now that, since AWS uses EBS volumes, we might see the same type of latency on IO operations that other distributions see.
Could we also add some sanity tests?