fix: make the bootstrapper exit cleanly on subsequent runs #174
Conversation
I have tested this change locally with microk8s by doing the following:
docker build . -t localhost:32000/autocert-bootstrapper -f bootstrapper/Dockerfile
docker build . -t localhost:32000/autocert-controller -f controller/Dockerfile
docker build . -t localhost:32000/autocert-renewer -f renewer/Dockerfile
docker push localhost:32000/autocert-bootstrapper
docker push localhost:32000/autocert-controller
docker push localhost:32000/autocert-renewer
I then deployed the chart with the following Helm values to point each component at the locally built images:

autocert:
  image:
    repository: localhost:32000/autocert-controller
    tag: latest
    pullPolicy: Always
bootstrapper:
  image:
    repository: localhost:32000/autocert-bootstrapper
    tag: latest
    pullPolicy: Always
renewer:
  image:
    repository: localhost:32000/autocert-renewer
    tag: latest
    pullPolicy: Always
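The overrides above can then be applied when deploying the chart. A sketch, assuming the overrides are saved to values.yaml and the chart is installed from the smallstep repository as a release named autocert (release name, chart reference, and namespace here are illustrative, not taken from the PR):

```shell
# Apply the image overrides when installing/upgrading the chart.
# Release name, chart reference, and namespace are assumptions.
helm upgrade --install autocert smallstep/autocert \
  --namespace step \
  -f values.yaml
```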
I can confirm that, following these steps, the bootstrapper logs a message the second time around and exits cleanly.
Hey @voxeljorge 👋. Pleasure to e-meet you and thanks for the PR! We'll be taking a look at this shortly (within the next few weeks) - bogged down with some other work at the moment. Cheers!
Thanks @voxeljorge lgtm
172596c
Name of feature:
Make autocert-bootstrapper exit cleanly on subsequent runs
Pain or issue this feature alleviates:
Resolves #173
Why is this important to the project (if not answered above):
It fixes a bug where the bootstrapper fails on subsequent runs of a pod that has already been provisioned with a certificate.
Is there documentation on how to use this feature? If so, where?
I'm not sure additional documentation is necessary here.
In what environments or workflows is this feature supported?
Any environment where pods run for a long time.
In what environments or workflows is this feature explicitly NOT supported (if any)?
None specifically
Supporting links/other PRs/issues:
According to the Kubernetes docs, pods (and their init containers) can be restarted for a number of reasons, and in the current setup the bootstrapper will fail on those subsequent runs. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#pod-restart-reasons
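The fix boils down to making the bootstrapper idempotent: if a previous run already provisioned the certificate, a restart should exit cleanly rather than fail the init container. A minimal shell sketch of the idea (the real bootstrapper is written in Go; the function name and the site.crt/site.key paths are assumptions for illustration, not the actual code):

```shell
# Hypothetical idempotency guard; paths and names are illustrative.
bootstrap_guard() {
  crt_dir="$1"
  if [ -f "$crt_dir/site.crt" ] && [ -f "$crt_dir/site.key" ]; then
    # A previous run already provisioned the certificate:
    # exit cleanly instead of failing the init container.
    echo "certificate already present, exiting cleanly"
    return 0
  fi
  echo "bootstrapping certificate"
  # ...fetch and write the certificate here; stubbed out in this sketch...
  touch "$crt_dir/site.crt" "$crt_dir/site.key"
}
```

Running the guard twice against the same directory shows the behavior the PR is after: the first run bootstraps, the second detects the existing certificate and returns successfully.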