alexarnoldy/Kubernetes

Cluster and namespace switchers using tab completion. Add this code block to the .bashrc file:

# Function to switch between K8s clusters with kubeconfig files in ${HOME}/.kube/
k8s_cluster_select ()
{
 unset KUBECONFIG
 export KUBECONFIG=${HOME}/.kube/$1
}

# Function to configure command completion for the k8s_cluster_select function
_kube_cluster_completion()
{
  local curr_arg;
  curr_arg=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "- $(ls -1 -p ${HOME}/.kube | grep -v /)" -- $curr_arg ) );
}

complete -F _kube_cluster_completion k8s_cluster_select

# Function to switch between namespaces in the currently selected K8s cluster

k8s_namespace_select ()
{
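 # `kubectl config get-contexts` columns are: CURRENT NAME CLUSTER AUTHINFO NAMESPACE,
 # so index 2 below is the current cluster and index 3 is the current user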
 read -r -d '' -a CURRENT_CONTEXT < <( kubectl config get-contexts | grep '^\*' && printf '\0' )
 kubectl config set-context $1 --cluster="${CURRENT_CONTEXT[2]}" --user="${CURRENT_CONTEXT[3]}" --namespace=$1
 kubectl config use-context $1
}

# Function to configure command completion for the k8s_namespace_select function

_kube_namespace_completion()
{
 local curr_arg
 curr_arg=${COMP_WORDS[COMP_CWORD]}
 COMPREPLY=( $(compgen -W "- $(kubectl get namespaces -o custom-columns=:.metadata.name | grep -v ^$)" -- $curr_arg ) )
}

complete -F _kube_namespace_completion k8s_namespace_select
Note
To use the functions, type the start of the command (k8s_) followed by "c" or "n", use tab completion to finish the command name, then use tab completion again to select the cluster, or the namespace inside the currently selected cluster.
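For example, a session might look like this (the kubeconfig filename and namespace are only placeholders):

k8s_cluster_select dev-cluster.conf    # filename tab-completed from the files in ~/.kube
k8s_namespace_select monitoring        # namespace tab-completed from the selected cluster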

TIP: Consume the contents of a secret or configmap to set shell environment variables:

Note
The secret or configmap keys become the variable names and the values become the variable values (seems obvious, but maybe not to everyone).
Note
The two-step operation might be better for documentation, to keep repetition down; just source the variables file after creating it.
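To try the pipelines below, a throwaway configmap can be created first; the keys and values here are only placeholders:

kubectl create configmap variables -n cattle-system --from-literal=FOO=bar --from-literal=GREETING=hello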
  • Single operation for environment variables saved in a configmap:

source <(kubectl get configmap -n cattle-system variables -o json | jq -r '.data | to_entries[] | "\(.key)=\(.value)"' | sed 's/^/export /')
  • Two step operation for environment variables saved in a configmap:

kubectl get configmap -n cattle-system variables -o json | jq -r '.data | to_entries[] | "\(.key)=\(.value)"' | sed 's/^/export /' > /tmp/variables

source /tmp/variables
  • Single and two step operations for environment variables saved in a secret:

source <(kubectl get secret -n cattle-system variables -o json | jq -r '.data | to_entries[] | "\(.key)=\(.value | @base64d)"' | sed 's/^/export /')
kubectl get secret -n cattle-system variables -o json | jq -r '.data | to_entries[] | "\(.key)=\(.value | @base64d)"' | sed 's/^/export /' > /tmp/variables

source /tmp/variables
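Secret values are stored base64-encoded, which is why the secret variants add the @base64d filter. A throwaway secret for testing could be created with (key and value are placeholders):

kubectl create secret generic variables -n cattle-system --from-literal=TOKEN=abc123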

TIP: To make it easy to access multiple clusters with different authentication mechanisms:

  • Add kubeconfig, config and admin.conf files for all of the clusters under ~/.kube

    • Ensure all filenames end with .conf

    • Can use descriptive filenames to organize the clusters

  • Add this line to ~/.bashrc: export KUBECONFIG=$(for EACH in $(ls ~/.kube/*.conf); do echo -n ${EACH}":"; done)

  • Source the file: source ~/.bashrc

  • Verify the configs for all clusters and contexts are available: kubectl config view

  • Review the contexts that are available: kubectl config get-contexts

  • Create a new context (e.g. to use a different implicit namespace): kubectl config set-context <name> [--cluster=<name>] [--namespace=<namespace>] [--user=<name>] (see the example after this list)

  • Specify a context to use: kubectl config use-context <name>
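For example (all names here are placeholders), assuming a cluster dev-cluster with user dev-admin already exists in one of the merged kubeconfigs, a context that defaults to the monitoring namespace could be created and selected with:

kubectl config set-context dev-monitoring --cluster=dev-cluster --user=dev-admin --namespace=monitoring
kubectl config use-context dev-monitoring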

TIP: Find all resources in a namespace (kubectl get all misses a bunch):

NAMESPACE=argocd
for EACH in $(kubectl api-resources --namespaced=true | awk '{print $1}' | grep -v NAME); do
  echo -n "${EACH} "
  kubectl get ${EACH} -n ${NAMESPACE} 2>/dev/null && echo ""
done
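A commonly used alternative, which only queries resource types that support list and prefixes each result with its kind, is:

kubectl api-resources --verbs=list --namespaced -o name | xargs -n1 kubectl get -n ${NAMESPACE} --ignore-not-found --show-kind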

TIP: Set up port forwarding on all IPs on the kubectl-enabled system:

kubectl port-forward -n <namespace> --address 0.0.0.0 <resource-type>/<resource> <local port>:<service port>
Ctrl-z
bg
  • Resource can be a service, deploy (or daemonset, job, etc.), or pod, as long as the specified resource is listening on the specified port (watch out for services that listen on a different port than their endpoints; see the port check at the end of this tip)

  • This can be paired with an SSH tunnel to allow access to a service running behind a NAT router (i.e. libvirt NAT) from a web browser on a laptop:

    • On the node inside the NAT network that has kubectl installed:

kubectl port-forward --address 0.0.0.0 -n jupyter deployment/jupyter-minimal 8888:8888
Ctrl-z
bg
  • On the laptop, connect an SSH tunnel to a host that has access to both the laptop’s network and the node running kubectl:

ssh -L localhost:8888:10.110.1.0:8888 -N sles@172.16.240.102
  • Where:

    • localhost:8888 is the address that the laptop will use to access the webpage

    • 10.110.1.0:8888 is the IP of the node inside the NAT network that is running kubectl, and the port on which kubectl port-forward is making the service available

    • -N means that no remote command is to be executed as part of this invocation; ssh only forwards the ports

    • sles@172.16.240.102 is the outside address of the node that straddles the laptop’s network and the NAT network

Note
To forward multiple ports, use a separate -L for each: ssh -L localhost:8443:10.222.11.11:443 -L localhost:8080:10.222.11.11:80 -N k3ai-host-1
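To check whether a service listens on a different port than its endpoints (the warning earlier in this tip), the port-to-targetPort mapping can be printed directly; the service name and namespace are placeholders:

kubectl get svc <name> -n <namespace> -o jsonpath='{range .spec.ports[*]}{.port} -> {.targetPort}{"\n"}{end}'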

TIP: Fix for running containers with Podman after CRI-O has been set up on a node (doing this should be avoided, but it can be useful for troubleshooting on that specific node).

  • Note: podman run throws the error "Missing CNI default network"

  • The fix is to add --net=host to the run command, e.g.: sudo podman run --rm --net=host nvidia/cuda nvidia-smi

TIP: Expose a service through a specific NodePort:

  • Verify that the service is currently of the type ClusterIP:

kubectl -n cattle-system get svc/rancher
  • If the service is ClusterIP, use the following commands to change it to NodePort and assign it port 30443 on the Linux host:

cat <<EOF> patch-NodePort.yaml
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30443
      name: https-internal
EOF
kubectl patch -n cattle-system svc/rancher --patch "$(cat patch-NodePort.yaml)"
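The same patch should also work inline, without the intermediate file:

kubectl patch -n cattle-system svc/rancher --patch '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30443,"name":"https-internal"}]}}'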
  • Verify the exposed port for the service:

kubectl -n cattle-system get svc/rancher | awk -F443: '{print$2}' | awk -F\/ '{print$1}'
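A more direct way to read the assigned nodePort, avoiding the table parsing, is a jsonpath query:

kubectl -n cattle-system get svc/rancher -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}'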
Note
Might need a load balancer and ingress rules if services specifically depend on port 443 being exposed.

TIP: Restart a Kubernetes Job that has completed:

kubectl get pods -n <namespace> -l <selector>=<label> -o yaml | kubectl replace --force -f -
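The Job controller automatically labels its pods with job-name=<job name>, so for a hypothetical Job named nightly-backup in namespace batch-jobs the command becomes:

kubectl get pods -n batch-jobs -l job-name=nightly-backup -o yaml | kubectl replace --force -f -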
