Cleanup IPVS / iptables rules during kubeadm reset
#2587
Comments
kube-proxy is containerized and runs in a DaemonSet-managed pod on all nodes. It is also unclear whether `kube-proxy --cleanup` would work in that setup. @aojea @rikatz @andrewsykim
Also, I'm pretty sure that the kubelet adds iptables rules too, which means a kube-proxy-only cleanup would be incomplete.
kube-proxy creates the iptables rules in the host namespace; it runs with host networking and mounts the required paths to use iptables.

That flag deletes the kube-proxy rules; indeed, the kubelet adds rules too.

Indeed, these things are not simple, but one could argue that if you want to start from scratch it could be legitimate to ask to reset all of the iptables rules.
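Resetting the whole rule set, as floated above, would look roughly like the sketch below. This is not part of kubeadm; it is a hedged illustration assuming `iptables`, `ipvsadm`, and `ipset` are installed. Because these commands wipe every rule on the host (not just Kubernetes-created ones), the sketch defaults to printing the commands instead of executing them:

```shell
# Best-effort host network-rule reset (illustrative, NOT kubeadm code).
# DRY_RUN=1 prints each command instead of running it.
DRY_RUN=1
run() {
  if [ "${DRY_RUN}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run iptables -F           # flush all chains in the filter table
run iptables -t nat -F    # flush the nat table (KUBE-* chains live here)
run iptables -t mangle -F # flush the mangle table
run iptables -X           # delete empty non-default chains
run ipvsadm --clear       # remove all IPVS virtual services
run ipset destroy         # destroy all ipsets (used by the ipvs proxier)
```

Set `DRY_RUN=0` only on a node you genuinely intend to wipe; anything a CNI plugin or firewall manager installed goes too.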
How about IPVS? Does the kubelet utilize ipvs/ipset directly, or is this done only by kube-proxy? In any case, calling `kube-proxy --cleanup` during `kubeadm reset` seems like a reasonable approach.

See also kubernetes/kubelet#32 - add an option to delete the iptables rules created by the kubelet.

Thanks for the reply @aojea. I think overall the cleanup of ipvs or iptables rules created by components during cluster creation is a reasonable request. It is borderline out of scope, because reset is not a complete node cleanup; it is a best-effort kubeadm node reset. Its main purpose is to allow subsequent init / join in a non-blocking way, and it does not clean up e.g. downloaded images or CNI plugin configuration either. If someone wishes to work on such an enhancement for ipvs / iptables, the best way would be to write a KEP that includes full implementation details and test coverage. Notable blockers here are that the kubelet lacks the cleanup option, while kube-proxy is an optional component, so detecting whether kubeadm deployed it can be tricky. We do not want to clean rules from a third-party kube-proxy.
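The detection problem mentioned above (telling a kubeadm-deployed kube-proxy apart from a third-party one) could be approximated by checking for the artifacts kubeadm creates. A minimal sketch, assuming the `k8s-app=kube-proxy` label kubeadm applies to its DaemonSet is present; the helper name is hypothetical:

```shell
# Hypothetical check: kubeadm installs kube-proxy as a DaemonSet in
# kube-system labeled k8s-app=kube-proxy (assumption: current kubeadm
# versions still set that label). A third-party proxy replacement would
# normally not carry it, so its rules would be left alone.
has_kubeadm_proxy() {
  # $1: output of `kubectl -n kube-system get ds --show-labels`
  echo "$1" | grep -q 'k8s-app=kube-proxy'
}

# On a real cluster:
#   if has_kubeadm_proxy "$(kubectl -n kube-system get ds --show-labels)"; then
#     echo "kubeadm-managed kube-proxy detected"
#   fi
```

This is only a heuristic; a third-party component could reuse the same label, which is exactly why the comment above calls the detection tricky.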
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten

/cc @wangyysde
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Calling `kubeadm init && kubeadm reset` prints:

Since `kube-proxy --cleanup` is supposed to clean up these things, `kubeadm reset` shall just call `kube-proxy --cleanup`.
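As a sketch of what the request amounts to, a reset step could shell out to kube-proxy's own cleanup mode. `--cleanup` is an existing kube-proxy flag that removes its iptables/IPVS rules and exits; the helper name, runtime argument, and image tag below are illustrative assumptions, not kubeadm code:

```shell
# Hypothetical helper for a kubeadm reset phase (illustration only).
# Runs kube-proxy's cleanup mode in a privileged host-network container so
# it can reach the node's iptables/ipvs state. The image tag is illustrative.
cleanup_proxy_rules() {
  # $1: container runtime CLI (e.g. docker or nerdctl)
  "$1" run --rm --privileged --net=host \
    registry.k8s.io/kube-proxy:v1.28.0 kube-proxy --cleanup
}

# Example (substitute a real runtime CLI for echo):
#   cleanup_proxy_rules docker
```

As the thread notes, this would still leave kubelet-created rules behind, and kubeadm would first need to confirm the kube-proxy in question is its own.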