Foreword
This article records common Kubernetes (k8s) operations. After all, the palest ink is better than the best memory.
1: Common operations
1.1: Viewing pod logs
dongyunqi@dongyunqi-virtual-machine:~/test$ kubectl logs busy-pod
ubuntu, on
Because there is only one container in this pod, there is no need to specify one. When a pod has multiple containers, you must specify the container with -c, as follows:
dongyunqi@dongyunqi-virtual-machine:~/test$ kubectl logs ngx-num1-pod -c ngx1111
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
...
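For reference, a minimal sketch of a pod manifest with two containers, the situation where -c becomes necessary. The names (two-containers-pod, the sidecar, its command) are illustrative assumptions, not taken from the pods above:

```yaml
# Hypothetical two-container pod: `kubectl logs two-containers-pod`
# alone would be ambiguous, so -c <container-name> is required.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-pod
spec:
  containers:
    - name: ngx1111              # first container: nginx
      image: nginx:alpine
    - name: sidecar              # second container: a simple busybox logger
      image: busybox
      command: ["sh", "-c", "while true; do echo heartbeat; sleep 10; done"]
```

With this manifest applied, `kubectl logs two-containers-pod -c sidecar` would show only the sidecar's output.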
1.2: Querying pods
Pods can be listed directly or filtered by label, as follows:
dongyunqi@mongodaddy:~/k8s$ kubectl get pod
NAME                           READY   STATUS    RESTARTS        AGE
ngx-dep-name-dcc8b7bfd-7lbrq   1/1     Running   0               17m
ngx-dep-name-dcc8b7bfd-9m2rs   1/1     Running   0               17m
ngx-dep-name-dcc8b7bfd-wp4zt   1/1     Running   0               17m
redis-ds-5jf5x                 1/1     Running   1 (4h34m ago)   40h
redis-ds-n8p45                 1/1     Running   1 (4h34m ago)   40h
You can also filter pods by their labels; the selector supports ==, !=, in, notin, and other expressions, as follows:
dongyunqi@mongodaddy:~/k8s$ kubectl get pod -l 'app in (ngx, nginx, ngx-dep-pod)'
NAME                            READY   STATUS    RESTARTS   AGE
ngx-dep-name-577dd8d59b-4xkd2   1/1     Running   0          3m34s
ngx-dep-name-577dd8d59b-dcx8k   1/1     Running   0          3m34s
ngx-dep-name-577dd8d59b-f2kxp   1/1     Running   0          3m34s
dongyunqi@mongodaddy:~/k8s$ kubectl get pod -l 'app=ngx-dep-pod'
NAME                            READY   STATUS    RESTARTS   AGE
ngx-dep-name-577dd8d59b-4xkd2   1/1     Running   0          3m59s
ngx-dep-name-577dd8d59b-dcx8k   1/1     Running   0          3m59s
ngx-dep-name-577dd8d59b-f2kxp   1/1     Running   0          3m59s
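The -l selector matches against labels declared in a pod's metadata. A minimal sketch of where such labels live; the app: ngx-dep-pod value mirrors the selector used above, while the rest of the manifest is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ngx-dep-pod
  labels:
    app: ngx-dep-pod   # matched by `kubectl get pod -l 'app=ngx-dep-pod'`
    env: demo          # hypothetical extra label
spec:
  containers:
    - name: nginx
      image: nginx:alpine
```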
1.3: Viewing the containers running in a pod
1.3.1: describe
dongyunqi@mongodaddy:~/k8s$ kubectl describe pod ngx-dep-name-dcc8b7bfd-7lbrq | grep Events -A 100
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  34m   default-scheduler  Successfully assigned default/ngx-dep-name-dcc8b7bfd-7lbrq to mongomummy   # node the pod was assigned to
  Normal  Pulled     34m   kubelet            Container image "nginx:alpine" already present on machine                  # image used by the container
  Normal  Created    34m   kubelet            Created container nginx                                                   # container created; shows the container name
  Normal  Started    34m   kubelet            Started container nginx
dongyunqi@mongodaddy:~/k8s$ kubectl describe pod ngx-dep-name-dcc8b7bfd-7lbrq | grep Containers: -A 100
Containers:            # all container information
  nginx:               # the name of the container
    Container ID:   docker://87afa9c8b6c1a9c8434a5dcbe39eb55863ecff4c91a68b408a3fd5f11b18a86f
    Image:          nginx:alpine   # the image used by the container
    Image ID:       docker-pullable://nginx@sha256:eb05700fe7baa6890b74278e39b66b2ed1326831f9ec3ed4bdc6361a4ac2f333
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 12 Jan 2023 14:59:05 +0800
    Ready:          True
    Restart Count:  0    # number of times the container has restarted
    Environment:    <none>
    Mounts:
      /etc/nginx/conf.d from ngx-conf-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wlbhf (ro)
1.3.2: jsonpath filtering
dongyunqi@mongodaddy:~/k8s$ kubectl get pods ngx-dep-name-dcc8b7bfd-7lbrq -o jsonpath={.spec.containers[*].name}
nginx
1.4: Troubleshooting abnormal pods through events
When a pod behaves abnormally, for example staying in Pending, you can inspect the abnormal information through kubectl describe:
dongyunqi@mongodaddy:~/k8s$ kubectl describe pod ngx-dep-name-dcc8b7bfd-7lbrq | grep Events -A 100
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  34m   default-scheduler  Successfully assigned default/ngx-dep-name-dcc8b7bfd-7lbrq to mongomummy   # node the pod was assigned to
  Normal  Pulled     34m   kubelet            Container image "nginx:alpine" already present on machine                  # image used by the container
  Normal  Created    34m   kubelet            Created container nginx                                                   # container created; shows the container name
  Normal  Started    34m   kubelet            Started container nginx
The output here is normal. If an error occurs, the Message column will contain a corresponding hint. The following is a typical error message found online:
0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 Insufficient memory.
0/5 means that none of the 5 nodes can accept the pod: 1 node(s) had taint {node-role.kubernetes.io/master: } says one node was rejected because of a taint the pod does not tolerate, and 4 Insufficient memory says the other four lack memory. With the causes identified, we can address each one accordingly.
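For the two causes above, you could either add a toleration for the master taint or reduce the pod's memory request. A hedged sketch of both fixes in one manifest; the pod name, image, and memory value are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  tolerations:                              # tolerate the master node's taint
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:alpine
      resources:
        requests:
          memory: "128Mi"                   # keep the request small enough to fit a node
```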
1.5: Viewing the namespace an API object belongs to
dongyunqi@mongodaddy:~/k8s$ kubectl get service -A | egrep 'NAMESP|ngx-svc'| awk '{print $1}'
NAMESPACE
default
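The pipeline above is plain text filtering on kubectl's tabular output: egrep keeps the header line and the ngx-svc row, and awk prints the first column (the namespace). The same combination can be tried offline on a captured sample; the service rows below are illustrative, not from the cluster above:

```shell
# Keep the header plus the ngx-svc row, then print column 1 (the namespace).
printf '%s\n' \
  'NAMESPACE     NAME         TYPE        CLUSTER-IP     PORT(S)' \
  'default       ngx-svc      ClusterIP   10.96.0.10     80/TCP' \
  'kube-system   kube-dns     ClusterIP   10.96.0.2      53/UDP' \
  | egrep 'NAMESP|ngx-svc' | awk '{print $1}'
# Output:
# NAMESPACE
# default
```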
1.6: Get k8s cluster node information
dongyunqi@mongodaddy:~/k8s$ kubectl get node -o wide
NAME         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
mongodaddy   Ready    control-plane,master   6h36m   v1.23.3   192.168.64.131   <none>        Ubuntu 22.04.1 LTS   5.15.0-57-generic   docker://20.10.12
mongomummy   Ready    <none>                 6h31m   v1.23.3   192.168.64.132   <none>        Ubuntu 22.04.1 LTS   5.15.0-57-generic   docker://20.10.12
1.7: Get all namespaces
yunqi@mongodaddy:~/k8s$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   21h
kube-flannel      Active   20h
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h
nginx-ingress     Active   13h
1.8: Pod is Running but not Ready
The cause of this phenomenon is usually a configured health check: the container itself started successfully, but the health check on the application inside the container keeps failing, i.e. the application has not actually started successfully, as follows:
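Running-but-not-Ready typically traces back to a readiness probe like the one sketched below: the pod stays at READY 0/1 until the probe succeeds. The pod name, probe path, and timings are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: app
      image: nginx:alpine
      readinessProbe:            # pod is only marked Ready once this check passes
        httpGet:
          path: /readyz          # hypothetical health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```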
dongyunqi@mongodaddy:~/k8s$ kubectl get pod -n nginx-ingress
NAME                           READY   STATUS    RESTARTS   AGE
ngx-kic-dep-67c7cf6d5f-dg5cw   0/1     Running   0          43s
At this point, you can view the application log to locate the specific problem, as shown in the following example:
dongyunqi@mongodaddy:~/k8s$ kubectl logs ngx-kic-dep-67c7cf6d5f-dg5cw -n nginx-ingress
I0114 03:23:11.052364       1 main.go:213] Starting NGINX Ingress Controller Version=2.2.2 GitCommit=a88b7fe6dbde5df79593ac161749afc1e9a009c6 Date=2022-05-24T00:33:34Z Arch=linux/amd64 PlusFlag=false
....
2023/01/14 03:23:11 [notice] 12#12: OS: Linux 5.15.0-57-generic
2023/01/14 03:23:11 [notice] 12#12: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/01/14 03:23:11 [notice] 12#12: start worker processes
2023/01/14 03:23:11 [notice] 12#12: start worker process 13
I0114 03:23:11.115502       1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginx-ingress-leader-election...
W0114 03:23:11.117925       1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.6/tools/cache/reflector.go:167: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
E0114 03:23:11.118154       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.6/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
W0114 03:23:11.137598       1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.6/tools/cache/reflector.go:167: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
The errors above occur because some of the ingress controller's custom resources (such as TransportServer and Policy) were not initialized when the ingress controller was deployed.
1.9: Querying which API objects k8s has
dongyunqi@mongodaddy:~/k8s$ kubectl api-resources|egrep -w 'Ingress|KIND'
NAME                SHORTNAMES   APIVERSION             NAMESPACED   KIND
ingresses           ing          networking.k8s.io/v1   true         Ingress
dongyunqi@mongodaddy:~/k8s$ kubectl api-resources
NAME                SHORTNAMES   APIVERSION             NAMESPACED   KIND
bindings                         v1                     true         Binding
componentstatuses   cs           v1                     false        ComponentStatus
...
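Note the -w flag in the first command: it makes egrep match 'Ingress' only as a whole word, so rows whose KIND merely starts with the word (such as IngressClass) are excluded. This can be demonstrated offline on a captured sample; the ingressclasses row below is illustrative:

```shell
# -w requires whole-word matches, so 'Ingress' does not match
# inside 'IngressClass' and that row is filtered out.
printf '%s\n' \
  'NAME             SHORTNAMES   APIVERSION             NAMESPACED   KIND' \
  'ingresses        ing          networking.k8s.io/v1   true         Ingress' \
  'ingressclasses                networking.k8s.io/v1   false        IngressClass' \
  | egrep -w 'Ingress|KIND'
# Output:
# NAME             SHORTNAMES   APIVERSION             NAMESPACED   KIND
# ingresses        ing          networking.k8s.io/v1   true         Ingress
```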
2: Frequently asked questions
2.1: Pod stuck in ContainerCreating
View the cause of the error through kubectl describe pod <pod-name>, as follows:
dongyunqi@mongodaddy:~/k8s$ kubectl describe pod busy-pod
Name:         busy-pod
...
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               4m5s  default-scheduler  Successfully assigned default/busy-pod to mongomummy
  Warning  FailedCreatePodSandBox  4m5s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "81294866bfbdb13363f63bab5b2a87434994b7890ec03e601869cb586d1fad87" network for pod "busy-pod": networkPlugin cni failed to set up pod "busy-pod_default" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  4m3s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "533c9ceee9a99087be62bb1efdc624cf72f81397443cd908252abc4cea7b7c1b" network for pod "busy-pod": networkPlugin cni failed to set up pod "busy-pod_default" network: open /run/flannel/subnet.env: no such file or directory
  ...
You only need to install a network plugin (such as Flannel) in the k8s cluster; for details, please refer to this article.