Feat/leader election mechanism using leases #75
base: main
Conversation
@@ -207,3 +218,86 @@ func (c client) addWorkloadEvent(eventType, reason, id, message string, workload
	}
	return nil
}

// CreateOrUpdateLease attempts to acquire and maintain a lease for leadership.
The kubernetes package/client should only contain very basic functions that in most cases map to a single API request to the cluster. So I would put all the election logic in a separate leader package and only keep a function here that creates/updates a lease object.
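For illustration, a minimal sketch of what such a thin wrapper could look like, using the existing `client` receiver from this file; the `clientset` field name, the function name, and the 30-second lease duration are assumptions, not the PR's actual code:

```go
package kubernetes

import (
	"context"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// CreateLease creates a lease object in the given namespace. It deliberately
// contains no election logic; that would live in a separate leader package.
func (c client) CreateLease(ctx context.Context, name, namespace, holder string) error {
	leaseDuration := int32(30) // assumed lease duration in seconds

	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &leaseDuration,
		},
	}

	// single API request to the cluster, as suggested above
	_, err := c.clientset.CoordinationV1().Leases(namespace).Create(ctx, lease, metav1.CreateOptions{})
	return err
}
```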
Had a first look at the changes. I found that there is actually a leader election package from k8s. I think we should use that one, since it makes leader election easier and less verbose.
https://pkg.go.dev/k8s.io/client-go/tools/leaderelection
I've also found this article, which gives an example of how to use it: https://dev.to/sklarsa/how-to-add-kubernetes-powered-leader-election-to-your-go-apps-57jh
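As a rough illustration of that suggestion (not part of this PR), wiring in the client-go leaderelection package could look like the sketch below. The package and function names, the lease name, and the timing values are placeholder assumptions:

```go
package leader

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// RunWithLeaderElection blocks and runs onStartedLeading only while this
// replica holds the lease; the lock is released when ctx is cancelled.
func RunWithLeaderElection(ctx context.Context, clientset kubernetes.Interface, namespace, identity string, onStartedLeading func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "go-kube-downscaler", // assumed lease name
			Namespace: namespace,
		},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
	}

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // example timings, to be tuned
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: onStartedLeading,
			OnStoppedLeading: func() {
				// this replica lost the lease: stop leader-only work here
			},
		},
	})
}
```

With this approach the downscaler's scan loop could be passed as onStartedLeading instead of toggling an isLeader flag manually.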
{{/*
Create defined permissions for lease role
*/}}
{{- define "go-kube-downscaler.leases.permissions" -}}
- apiGroups:
    - coordination.k8s.io
  resources:
    - leases
  verbs:
    - get
    - create
    - watch
    - list
    - update
    - delete
{{- end -}}
It might be worth another discussion, but I think it is worth keeping the lease role separate (like @samuel-esp already did), since otherwise the downscaler would have cluster-wide access to all lease objects even though it only needs access in its own namespace.
{{/*
Create defined permissions for lease role
*/}}
{{- define "go-kube-downscaler.leases.permissions" -}}
- apiGroups:
    - coordination.k8s.io
  resources:
    - leases
  verbs:
    - get
    - create
    - watch
    - list
    - update
    - delete
I think we should put this directly in the role manifest, since the role is always going to stay the same; I think the main reason we put the permissions here for the other roles is that both roles shared their permissions.
Motivation
In order to avoid conflicts among multiple replicas when modifying resources (see #68), it is necessary to implement a leader election mechanism that elects a leader among the available replicas. The feature should be implemented using Leases (see Kubernetes Leases Explained).
Changes
- Added a nested GoRoutine inside main.go that stores the state of the replica (isLeader)
- Added two functions to client.go to manage the lease lifecycle used by the nested GoRoutine mentioned above (CreateOrUpdateLease and DeleteLease). The first function calls GetCurrentNamespaceFromFile from util.go (which reads the ServiceAccount used by the downscaler) to determine the namespace where it needs to create the lease object (a hedged sketch of such a helper is shown after this list)
- Added a Role and RoleBinding to the Helm chart that allows the downscaler to create, get, update and delete Leases inside its own namespace
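As a side note on the second bullet, a minimal sketch of how such a namespace lookup helper could work, reading the namespace from the standard in-cluster ServiceAccount mount; the exact implementation in util.go may differ:

```go
package util

import (
	"fmt"
	"os"
	"strings"
)

// serviceAccountNamespaceFile is the standard in-cluster mount path that holds
// the namespace of the pod's ServiceAccount.
const serviceAccountNamespaceFile = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

// GetCurrentNamespaceFromFile returns the namespace the downscaler is running in.
func GetCurrentNamespaceFromFile() (string, error) {
	data, err := os.ReadFile(serviceAccountNamespaceFile)
	if err != nil {
		return "", fmt.Errorf("reading namespace file: %w", err)
	}
	return strings.TrimSpace(string(data)), nil
}
```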
Tests done
TODO