
Monitoring rds service delay is too large. #1590

Open
1 task done
jeffaryhe opened this issue Dec 12, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@jeffaryhe

Is there an existing issue for this?

  • I have searched the existing issues

YACE version

v0.61.2

Config file

apiVersion: v1alpha1
discovery:
  jobs:
    - type: AWS/RDS
      regions:
        - ca-central-1
      period: 60
      length: 120
      metrics:
        - name: EngineUptime
          statistics: [Maximum]
        - name: CPUUtilization
          statistics: [Average]

Current Behavior

I tested how quickly the RDS CPU metric can be obtained with this config. The data delay is too large to monitor and alert on accurately: it takes at least 3 minutes before the metric reflects an accurate alarm value.

Expected Behavior

Monitoring metrics should be obtainable within 60 seconds.

Steps To Reproduce

no

Anything else?

No response

@jeffaryhe added the bug (Something isn't working) label Dec 12, 2024
@tyagian

tyagian commented Jan 14, 2025

I suggest you fix the period and length settings; it's good to keep them the same.

The period parameter in your YACE config is set to 60, meaning it fetches CloudWatch metrics at 1-minute intervals. However, the length parameter is set to 120, which means YACE fetches data from the last 2 minutes. This can cause YACE to pull older data, resulting in delays.
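
For example, a job config along these lines keeps the two values aligned (a minimal sketch adapted from the reporter's config above; only length is changed, everything else is assumed to stay the same):

apiVersion: v1alpha1
discovery:
  jobs:
    - type: AWS/RDS
      regions:
        - ca-central-1
      period: 60
      length: 60
      metrics:
        - name: EngineUptime
          statistics: [Maximum]
        - name: CPUUtilization
          statistics: [Average]

With period and length both set to 60, each scrape should request only the most recent 1-minute datapoint rather than a 2-minute window, which reduces how far behind the exported values lag.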
