
Cleanup IPVS / iptables rules during kubeadm reset #2587

Closed
dilyanpalauzov opened this issue Oct 8, 2021 · 16 comments
Labels: kind/design, kind/feature, lifecycle/rotten, priority/backlog
Milestone: Next

@dilyanpalauzov

Calling `kubeadm init && kubeadm reset` prints:

```
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
```

Since `kube-proxy --cleanup` is supposed to clean up these things, `kubeadm reset` should just call `kube-proxy --cleanup`.
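
For reference, the manual cleanup that the message asks for amounts to roughly the following (a sketch only; note that flushing iptables wholesale also removes rules that have nothing to do with Kubernetes):

```sh
# Flush all rules and delete all custom chains in the common tables
# (this clears ALL rules on the host, not only Kubernetes-created ones).
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X

# Clear the IPVS table (relevant when kube-proxy runs in IPVS mode).
ipvsadm --clear
```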

@neolit123 (Member) commented Oct 8, 2021

> Since `kube-proxy --cleanup` is supposed to clean up these things, `kubeadm reset` should just call `kube-proxy --cleanup`.

kube-proxy is containerized and runs in a DaemonSet-managed pod on all nodes.
It is never run on the host directly, so it is unclear whether adding `--cleanup` to the reset flow would help here.

It is also unclear whether `--cleanup` will be affected by the plans to move kube-proxy to a rootless container:
#2410

@aojea @rikatz @andrewsykim
Do you happen to know more about this `--cleanup` flag?

@neolit123 added the priority/awaiting-more-evidence and kind/feature labels Oct 8, 2021
@neolit123 (Member) commented Oct 8, 2021

Making `kubeadm reset` exec into the right kube-proxy pod for that particular node also seems very hacky.

Also, I'm pretty sure that the kubelet adds iptables rules too, which means a kube-proxy-only cleanup would be incomplete.
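
For illustration, that exec-based approach would amount to roughly the following (a sketch, assuming the kubeadm-deployed DaemonSet with its usual `k8s-app=kube-proxy` pod label; it breaks if kube-proxy was deployed differently, and the DaemonSet recreates the rules as soon as the pod restarts):

```sh
# Hypothetical sketch: find this node's kube-proxy pod and run the cleanup inside it.
NODE=$(hostname)
POD=$(kubectl -n kube-system get pods -l k8s-app=kube-proxy \
  --field-selector spec.nodeName="$NODE" \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$POD" -- kube-proxy --cleanup
```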

@aojea (Member) commented Oct 8, 2021

> Do you happen to know more about this `--cleanup` flag?

kube-proxy creates the iptables rules in the host network namespace; it runs with host networking and mounts the required paths to use iptables:

```yaml
        image: k8s.gcr.io/kube-proxy:v1.22.1
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
```

> Also, I'm pretty sure that the kubelet adds iptables rules too, which means a kube-proxy-only cleanup would be incomplete.

That flag deletes the kube-proxy rules. Indeed, the kubelet adds rules too.

> Making `kubeadm reset` exec into the right kube-proxy pod for that particular node also seems very hacky.

Indeed, these things are not simple, but one could argue that if you want to start from scratch it would be legitimate to reset the whole set of iptables rules.
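
Given those mounts, running the cleanup outside of the DaemonSet could look something like a one-off privileged container on the node (a sketch, assuming a Docker runtime and the image version shown above):

```sh
# Hypothetical sketch: run kube-proxy --cleanup in the host network namespace,
# reusing the same mounts the DaemonSet pod uses.
docker run --rm --privileged --net=host \
  -v /run/xtables.lock:/run/xtables.lock \
  -v /lib/modules:/lib/modules:ro \
  k8s.gcr.io/kube-proxy:v1.22.1 \
  kube-proxy --cleanup
```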

@dilyanpalauzov (Author)

How about IPVS? Does the kubelet utilize IPVS/ipset directly, or is that done only by kube-proxy?

In any case, having `kubeadm reset` call `kube-proxy --cleanup` would be a step in the right direction.

@dilyanpalauzov (Author)

See also kubernetes/kubelet#32, which asks for an option to delete the iptables rules created by the kubelet.

@neolit123 (Member) commented Oct 9, 2021

Thanks for the reply @aojea

I think that, overall, cleaning up the IPVS or iptables rules created by components during cluster creation is a reasonable request. It is borderline out of scope, because reset is not a complete node cleanup; it is a best-effort kubeadm node reset. Its main purpose is to allow a subsequent init / join in a non-blocking way, and it does not clean up e.g. downloaded images or CNI plugin configuration either.

If someone wishes to work on such an enhancement for IPVS / iptables, the best way would be to write a KEP that includes full implementation details and test coverage:
https://github.com/kubernetes/enhancements/tree/master/keps

Notable blockers here are that the kubelet lacks a cleanup option, and that kube-proxy is an optional component, so detecting whether kubeadm deployed it can be tricky. We do not want to clean up rules from a third-party kube-proxy.
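
To illustrate why detection is tricky: a naive check like the one below only verifies that a DaemonSet named kube-proxy exists in kube-system; it cannot tell a kubeadm-deployed kube-proxy apart from a third-party one that happens to use the same name.

```sh
# Naive detection sketch: check for a kube-proxy DaemonSet in kube-system.
# A third-party proxy deployed under the same name would pass this check too.
if kubectl -n kube-system get daemonset kube-proxy >/dev/null 2>&1; then
  echo "found a kube-proxy DaemonSet (not necessarily kubeadm's)"
else
  echo "no kube-proxy DaemonSet found"
fi
```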

@neolit123 added the kind/design and priority/backlog labels and removed the priority/awaiting-more-evidence label Oct 9, 2021
@neolit123 added this to the Next milestone Oct 9, 2021
@neolit123 changed the title from "Call kube-proxy --cleanup during kubeadm reset" to "Cleanup IPVS / iptables rules during kubeadm reset" Oct 9, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 17, 2022
@RA489 (Contributor) commented Mar 11, 2022

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Mar 11, 2022
@wangyysde (Member) commented Jun 7, 2022

/cc @wangyysde

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 5, 2022
@RA489 (Contributor) commented Sep 5, 2022

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Sep 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 4, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 3, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned Feb 2, 2023
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
