Cloud Native | Kubernetes - kubectl Cheat Sheet

Table of contents

Kubectl autocompletion

BASH

ZSH

A note about --all-namespaces

Kubectl context and configuration

Kubectl apply

create object

View and find resources

update resources

Partially update resources

edit resource

Scale resources

delete resource

Interact with running pods

Copy files and directories to and from containers

Interact with Deployments and Services

Interact with nodes and clusters

Resource Type

formatted output

Kubectl log output verbosity and debugging

Kubectl autocompletion

BASH

source <(kubectl completion bash) # To set the auto-completion of the current shell in bash, the bash-completion package must be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # Add autocompletion permanently to your bash shell

You can also use a shorthand alias for kubectl that works with completion:

alias k=kubectl
complete -o default -F __start_kubectl k

ZSH

source <(kubectl completion zsh)  # Set current shell autocompletion in zsh
echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # Add autocompletion permanently to your zsh shell

A note about --all-namespaces

The --all-namespaces flag comes up often enough that it is worth knowing its shorthand:

kubectl -A

Kubectl context and configuration

Set which Kubernetes cluster kubectl communicates with and modify configuration information.

kubectl config view # Show the merged kubeconfig configuration.

# Use multiple kubeconfig files at the same time and view the merged configuration
KUBECONFIG=~/.kube/config:~/.kube/kubconfig2

kubectl config view

# Get the password of the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'

kubectl config view -o jsonpath='{.users[].name}'    # show first user
kubectl config view -o jsonpath='{.users[*].name}'   # get user list
kubectl config get-contexts                          # show context list
kubectl config current-context                       # Show current context
kubectl config use-context my-cluster-name           # Set the default context to my-cluster-name

kubectl config set-cluster my-cluster-name           # Set cluster entries in kubeconfig

# Configure the URL of the proxy server in kubeconfig to use for this client's requests
kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url

# Add new user configuration to kubeconf, use basic auth for authentication
kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword

# Persistently save the namespace in the specified context for use by all subsequent kubectl commands
kubectl config set-context --current --namespace=ggckad-s2

# Set the context with a specific username and namespace
kubectl config set-context gce --user=cluster-admin --namespace=foo \
  && kubectl config use-context gce

kubectl config unset users.foo                       # delete user foo

# Set or display short aliases for context / namespace
# (for bash and bash-compatible shells only; set the current context before setting the namespace with kn)
alias kx='f() { [ "$1" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'
alias kn='f() { [ "$1" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d" " -f6 ; } ; f'
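A note on the `f() { ... ; } ; f` construction used above: ordinary aliases cannot accept positional arguments, so each alias defines a throwaway function and invokes it immediately. A minimal standalone sketch of the same pattern (the function name `pick` and its outputs are illustrative, not part of kubectl):

```shell
# Define-and-call: the function body branches on whether an argument
# was supplied, just like the kx/kn aliases do.
pick() { [ "$1" ] && echo "use $1" || echo "current"; }
with_arg=$(pick dev)
no_arg=$(pick)
echo "$with_arg"   # use dev
echo "$no_arg"     # current
```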


Kubectl apply

apply manages applications through files that define Kubernetes resources. It creates and updates resources in the cluster by running kubectl apply. This is the recommended way to manage Kubernetes applications in production.

 

create object

Kubernetes configuration can be defined in YAML or JSON. Acceptable file extensions are .yaml, .yml, and .json.

kubectl apply -f ./my-manifest.yaml           # Create resources
kubectl apply -f ./my1.yaml -f ./my2.yaml     # Create with multiple files
kubectl apply -f ./dir                        # Create resources based on all manifest files in a directory
kubectl apply -f https://git.io/vPieo # create resource from URL
kubectl create deployment nginx --image=nginx # Start single instance nginx

# Create a Job that prints "Hello World"
kubectl create job hello --image=busybox:1.28 -- echo "Hello World" 

# Create a CronJob that prints "Hello World" every 1 minute
kubectl create cronjob hello --image=busybox:1.28   --schedule="*/1 * * * *" -- echo "Hello World"    

kubectl explain pods                          # Get the documentation for pod manifests

# Create multiple YAML objects from standard input
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - sleep
    - "1000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep-less
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - sleep
    - "1000"
EOF

# Create a Secret with multiple keys
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: $(echo -n "s33msi4" | base64 -w0)
  username: $(echo -n "jane" | base64 -w0)
EOF
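The inline `$(echo -n ... | base64 -w0)` substitutions generate the encoded values at apply time; the round trip can be verified locally (GNU coreutils `base64` assumed here):

```shell
# Encode the same way the manifest above does, then decode to verify.
encoded=$(echo -n "jane" | base64)
decoded=$(echo "$encoded" | base64 --decode)
echo "$encoded"   # amFuZQ==
echo "$decoded"   # jane
```

The `-n` matters: without it, echo appends a newline that gets encoded into the Secret value.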

 

View and find resources

# Basic output of the get command
kubectl get services                          # List all services under the current namespace
kubectl get pods --all-namespaces             # List all Pods under all namespaces
kubectl get pods -o wide                      # List all Pods under the current namespace and display more detailed information
kubectl get deployment my-dep                 # List a specific Deployment
kubectl get pods                              # List all Pods under the current namespace
kubectl get pod my-pod -o yaml                # Get a pod's YAML

# Verbose output from the describe command
kubectl describe nodes my-node
kubectl describe pods my-pod

# List all Services under the current namespace, sorted by name
kubectl get services --sort-by=.metadata.name

# List Pods, sorted by restart count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

# List all PV persistent volumes, sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage

# Get the version label of all Pods with label app=cassandra
kubectl get pods --selector=app=cassandra -o \
  jsonpath='{.items[*].metadata.labels.version}'

# Retrieve the value of a key that contains a dot, for example 'ca.crt'
kubectl get configmap myconfig \
  -o jsonpath='{.data.ca\.crt}'

# Retrieve a base64-encoded value when the key name contains dashes rather than underscores.
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'

# Get all worker nodes (use selector to exclude results with label name 'node-role.kubernetes.io/control-plane')
kubectl get node --selector='!node-role.kubernetes.io/control-plane'

# Get the running Pods in the current namespace
kubectl get pods --field-selector=status.phase=Running

# Get the ExternalIP addresses of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

# List the names of Pods belonging to a specific RC
# The "jq" command is useful where the conversion is too complex for jsonpath; it can be found at https://stedolan.github.io/jq/.
sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
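The `%?` at the end of the `sel=` line is shell parameter expansion that strips one trailing character, here the comma the jq loop leaves after the last key=value pair (note that nesting it directly around `$(...)` as written works in zsh; in bash, assign to a variable first). Standalone, with a made-up selector string:

```shell
# ${var%?} removes one trailing character -- the dangling comma.
sel="app=my-rc,tier=backend,"
sel=${sel%?}
echo "$sel"   # app=my-rc,tier=backend
```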

# Display labels for all Pods (or any other Kubernetes object that supports labels)
kubectl get pods --show-labels

# Check which nodes are in ready state
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# Output the decoded Secret without using external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'

# List all Secrets used by a Pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

# List the container IDs (containerID) of the init containers in all Pods
# Can be used to avoid deleting init containers when cleaning up stopped containers
kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
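containerID values have the form `<runtime>://<id>`; splitting on `/` makes field 1 the runtime prefix and field 2 empty, so `cut -d/ -f3` keeps only the id. On a sample value (the id below is invented):

```shell
# Field 1 is "containerd:", field 2 is empty, field 3 is the bare id.
id=$(echo "containerd://0123abcd" | cut -d/ -f3)
echo "$id"   # 0123abcd
```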

# List events (Events), sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp

# list all warning events
kubectl events --types=Warning

# Compare the current cluster state with the cluster state assuming a manifest is applied
kubectl diff -f ./my-manifest.yaml

# Generate a period-separated tree containing all keys returned for a node
# Useful when locating keys in complex nested JSON structures
kubectl get nodes -o json | jq -c 'paths|join(".")'

# Produce a period-separated tree of all keys returned for Pods, etc.
kubectl get pods -o json | jq -c 'paths|join(".")'

# Assuming your Pods have a default container and a default namespace, and support the 'env' command, you can use the following script to generate ENV variables for all Pods.
# This script can also be used to run any supported command in all Pods, not just 'env'. 
for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done

# Get the status subresource of a Deployment
kubectl get deployment nginx-deployment --subresource=status

 

update resources

kubectl set image deployment/frontend www=image:v2               # Rolling updates to the "www" container image for a "frontend" Deployment
kubectl rollout history deployment/frontend                      # Check the Deployment's history, including versions
kubectl rollout undo deployment/frontend                         # Roll back to the last deployed version
kubectl rollout undo deployment/frontend --to-revision=2         # Roll back to a specific deployment version
kubectl rollout status -w deployment/frontend                    # Monitor the rolling upgrade status of the "frontend" Deployment until complete
kubectl rollout restart deployment/frontend                      # Rolling restart of the "frontend" Deployment

cat pod.json | kubectl replace -f -                              # Replace a Pod based on the JSON passed to stdin

# Force replace: delete and then re-create the resource. Causes a service outage.
kubectl replace --force -f ./pod.json

# Create a service for a replicated nginx, serving on port 80 and connecting to the containers on port 8000.
kubectl expose rc nginx --port=80 --target-port=8000

# Update the image version (label) of a single-container Pod to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
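The sed expression captures everything up to `image: myimage` and rewrites whatever tag follows the colon. Applied to a single sample manifest line (made up for illustration):

```shell
# \(image: myimage\) is captured as \1; ":.*$" swallows the old tag.
line="    image: myimage:v3"
updated=$(echo "$line" | sed 's/\(image: myimage\):.*$/\1:v4/')
echo "$updated"   # "    image: myimage:v4"
```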

kubectl label pods my-pod new-label=awesome                      # Add a label
kubectl label pods my-pod new-label-                             # Remove a label
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation
kubectl autoscale deployment foo --min=2 --max=10                # Autoscale capacity for "foo" Deployment

 

Partially update resources

# Partially update a node
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

# Update the image of a container; spec.containers[*].name is required because it is a merge key.
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

# Update a container's image using a JSON patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

# Disable a Deployment's livenessProbe using a JSON patch with positional arrays
kubectl patch deployment valid-deployment  --type json   -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'

# Add an element to a positional array
kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'

# Update the Deployment's replica count by patching its scale subresource
kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'

 

edit resource

Edit the API resource using your favorite editor.

kubectl edit svc/docker-registry                      # Edit the service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # use another editor

 

Scale resources

kubectl scale --replicas=3 rs/foo                                 # Scale the replica set named 'foo' to 3 replicas
kubectl scale --replicas=3 -f foo.yaml                            # scale a specific resource in "foo.yaml" to 3 replicas
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # If the replicas of the Deployment named mysql are currently 2, scale it to 3
kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Scale multiple replica controllers

 

delete resource

kubectl delete -f ./pod.json                                              # Delete a Pod using the type and name specified in pod.json
kubectl delete pod unwanted --now                                         # Delete a Pod immediately, with no grace period
kubectl delete pod,service baz foo                                        # Delete Pods and Services named "baz" and "foo"
kubectl delete pods,services -l name=myLabel                              # Delete pods and services with label name=myLabel
kubectl -n my-ns delete pod,svc --all                                     # Delete all Pods and Services in the my-ns namespace
# Delete all Pods matching the awk patterns pattern1 or pattern2
kubectl get pods  -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs  kubectl delete -n mynamespace pod
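The awk stage of that pipeline can be tried offline against sample `--no-headers` output (the pod names below are invented):

```shell
# awk prints column 1 (the pod name) of any line matching either pattern.
names=$(printf '%s\n' \
  "web-pattern1-abc  1/1  Running    0  1d" \
  "db-other-xyz      1/1  Running    0  1d" \
  "job-pattern2-def  0/1  Completed  0  1d" \
  | awk '/pattern1|pattern2/{print $1}')
echo "$names"   # web-pattern1-abc and job-pattern2-def, one per line
```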

 

Interact with running pods

kubectl logs my-pod                                 # Get pod logs (stdout)
kubectl logs -l name=myLabel                        # Get logs (stdout) for Pods with label name=myLabel
kubectl logs my-pod --previous                      # Get the logs for the previous instance of the Pod's container (stdout)
kubectl logs my-pod -c my-container                 # Get logs of Pod containers (stdout, multi-container scenarios)
kubectl logs -l name=myLabel -c my-container        # Get Pod container logs with label name=myLabel (stdout, multi-container scenario)
kubectl logs my-pod -c my-container --previous      # Get the logs of the last instance of a container in the Pod (standard output, multi-container scenario)
kubectl logs -f my-pod                              # Streaming Pod's logs (stdout)
kubectl logs -f my-pod -c my-container              # Streaming logs of Pod containers (stdout, multi-container scenarios)
kubectl logs -f -l name=myLabel --all-containers    # Stream all logs (stdout) for Pods with label name=myLabel
kubectl run -i --tty busybox --image=busybox:1.28 -- sh  # Run a Pod with an interactive shell
kubectl run nginx --image=nginx -n mynamespace      # Run a single nginx Pod in the "mynamespace" namespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
                                                    # Generates a spec for running nginx pods and writes it to a file called pod.yaml

kubectl attach my-pod -i                            # Attach to a running container
kubectl port-forward my-pod 5000:6000               # listen on port 5000 on local machine and forward to port 6000 on my-pod
kubectl exec my-pod -- ls /                         # Run a command in an existing Pod (single-container scenario)
kubectl exec --stdin --tty my-pod -- /bin/sh        # Use an interactive shell to access a running Pod (single-container scenario)
kubectl exec my-pod -c my-container -- ls /         # Run a command in an existing Pod (multi-container scenario)
kubectl top pod POD_NAME --containers               # Display monitoring data for a given Pod and its containers
kubectl top pod POD_NAME --sort-by=cpu              # Show metrics for a given Pod sorted by 'cpu' or 'memory'

 

Copy files and directories to and from containers

kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir            # Copy the local directory /tmp/foo_dir to /tmp/bar_dir in a remote Pod in the current namespace
kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container    # Copy local files from /tmp/foo to /tmp/bar of a specific container in a remote Pod
kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar       # Copy the local file from /tmp/foo to /tmp/bar in the specified Pod in the remote "my-namespace" namespace
kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar       # Copy /tmp/foo from remote Pod to local /tmp/bar

Note:

kubectl cp requires the "tar" binary to be present in the container image. If "tar" does not exist, kubectl cp will fail. For advanced use cases, such as symlinks, wildcard expansion, or preserving file permissions, consider using kubectl exec.

tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar  # Copy the local file from /tmp/foo to /tmp/bar in the pod in the remote "my-namespace" namespace
kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar    # Copy /tmp/foo from remote pod to local /tmp/bar
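The tar pipe streams an archive from one side to the other; the same pipeline works entirely locally, which is an easy way to see what it does (the temp directories below are scratch space, not part of the kubectl workflow):

```shell
# Same idiom as above, minus kubectl exec: pack a tree, unpack it elsewhere.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/foo.txt"
tar cf - -C "$src" . | tar xf - -C "$dst"
content=$(cat "$dst/foo.txt")
echo "$content"   # hello
rm -rf "$src" "$dst"
```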

 

Interact with Deployments and Services

kubectl logs deploy/my-deployment                         # Get the logs of a Deployment's Pod (single-container example)
kubectl logs deploy/my-deployment -c my-container         # Get the logs of a Deployment's Pod (multi-container example)

kubectl port-forward svc/my-service 5000                  # Listen on local port 5000 and forward to Service backend port 5000
kubectl port-forward svc/my-service 5000:my-service-port  # Listen on local port 5000 and forward to Service target port named <my-service-port>

kubectl port-forward deploy/my-deployment 5000:6000       # Listen on local port 5000 and forward to port 6000 in the Pod created by <my-deployment>
kubectl exec deploy/my-deployment -- ls                   # Run the command in the first container of the first Pod in the Deployment (single-container and multi-container examples)

 

Interact with nodes and clusters

kubectl cordon my-node                                                # Mark the node my-node as unschedulable
kubectl drain my-node                                                 # Clear the my-node node to prepare for node maintenance
kubectl uncordon my-node                                              # Mark the my-node node as schedulable
kubectl top node my-node                                              # Display the metric value for a given node
kubectl cluster-info                                                  # Display addresses of the control plane and cluster services
kubectl cluster-info dump                                             # Dump the current cluster state to stdout
kubectl cluster-info dump --output-directory=/path/to/cluster-state   # Output the current cluster state to /path/to/cluster-state

# View the existing taints on the current nodes.
kubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'

# If a taint with the specified key and effect already exists, replace its value with the specified value.
kubectl taint nodes foo dedicated=special-user:NoSchedule

Resource Type

List all supported resource types, along with their shortnames, API group, whether they are namespaced, and kind.

kubectl api-resources

Additional operations for exploring API resources:

kubectl api-resources --namespaced=true      # All namespace-scoped resources
kubectl api-resources --namespaced=false     # All non-namespace-scoped resources
kubectl api-resources -o name                # List all resources in simple format (resource names only)
kubectl api-resources -o wide                # List all resources in extended format (aka "wide" format)
kubectl api-resources --verbs=list,get       # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group

formatted output

To output detailed information to a terminal window in a specific format, add the -o (or --output ) argument to the supported kubectl commands.

Output format                        Description
-o=custom-columns=<spec>             Print a table using a comma-separated list of custom columns
-o=custom-columns-file=<filename>    Print a table using the custom-columns template in the <filename> file
-o=json                              Output the API object in JSON format
-o=jsonpath=<template>               Print the fields defined in a jsonpath expression
-o=jsonpath-file=<filename>          Print the fields defined by the jsonpath expression in the <filename> file
-o=name                              Print only the resource name and nothing else
-o=wide                              Output additional information in plain text; for Pods, this includes the node name
-o=yaml                              Output the API object in YAML format

Example using -o=custom-columns:

# All images running in the cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'

# List all images running in the default namespace, grouped by Pod
kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"

# All images except "registry.k8s.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="registry.k8s.io/coredns:1.6.2")].image'

# Output all fields under metadata, regardless of Pod name
kubectl get pods -A -o=custom-columns='DATA:metadata.*'

Kubectl log output verbosity and debugging

The verbosity of kubectl log output is controlled by the -v or --v flag, followed by an integer representing the log level.

Verbosity    Description
--v=0        Generally useful information that should always be visible to an operator.
--v=1        A reasonable default log level if you don't want extra verbosity.
--v=2        Useful steady-state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
--v=3        Extended information about changes.
--v=4        Debug-level verbosity.
--v=5        Trace-level verbosity.
--v=6        Display the requested resources.
--v=7        Display HTTP request headers.
--v=8        Display HTTP request contents.
--v=9        Display HTTP request contents without truncation.

Tags: Docker Kubernetes server Container Cloud Native

Posted by jbbaxx on Sun, 29 Jan 2023 19:31:22 +1030