Foreword
Kubernetes can check a Pod's health through two types of probes: livenessProbe and readinessProbe. The kubelet periodically executes these probes to diagnose the health of the container.
Introduction to livenessProbe
The liveness probe determines whether the Pod (more precisely, the application container inside it) is healthy; it can be understood as a health check. We use livenessProbe to probe periodically. If the probe succeeds, the Pod's status can be considered Running; if the probe fails, the kubelet restarts the container according to the Pod's restart policy.
If no livenessProbe is set for a Pod, the default probe result is always Success.
When we run the kubectl get pods command, the STATUS column of the output shows whether the Pod is Running.
Introduction to readinessProbe
The readiness probe determines whether the Pod is ready, where "ready" means the Pod can accept requests. We use readinessProbe to probe periodically. If the probe succeeds, the Pod's ready state is set to True; if the probe fails, the Pod's ready state is set to False.
Unlike with livenessProbe, the kubelet does not restart the container when a readinessProbe fails.
When we run the kubectl get pods command, the READY column of the output shows whether the Pod's containers are ready.
Define parameters
livenessProbe and readinessProbe take the same parameters. You can view them with the kubectl explain pods.spec.containers.readinessProbe or kubectl explain pods.spec.containers.livenessProbe command.
Both probes support several handler types:
httpGet
Sends an HTTP GET request to the container. If the call succeeds (judged by the HTTP status code), the probe is considered successful;
Usage:
```yaml
livenessProbe:
  httpGet:
    path: /app/healthz
    port: 80
```
exec
Executes a command inside the container. If the command succeeds (judged by an exit status code of 0), the probe is considered successful;
Usage:
```yaml
livenessProbe:
  exec:
    command:
    - cat
    - /app/healthz
```
tcpSocket
Attempts to open a TCP connection to the specified port of the container. If the connection is established successfully, the probe is considered successful.
Usage:
```yaml
livenessProbe:
  tcpSocket:
    port: 80
```
A container usually needs some time after starting before it can respond to probes, so the following timing parameters can be set when defining a probe:
- initialDelaySeconds: how many seconds to wait after the container starts before performing the first probe;
- timeoutSeconds: the number of seconds after which a probe times out and is counted as a failure. The default value is 1, and the minimum value is 1;
- periodSeconds: how often (in seconds) to perform the probe. The default value is 10, and the minimum value is 1;
- failureThreshold: after a successful probe, the number of consecutive failures required for the container to be considered failed (for readiness, not ready). The default value is 3, and the minimum value is 1;
- successThreshold: after a failed probe, the number of consecutive successes required for the container to be considered successful (for readiness, ready). The default value is 1, and the minimum value is 1.
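Putting these parameters together, a readinessProbe with all of the timing fields spelled out might look like the following sketch (the path and the concrete values are placeholders for illustration, not recommendations):

```yaml
readinessProbe:
  httpGet:
    path: /healthz        # placeholder health-check route
    port: 80
  initialDelaySeconds: 15  # wait 15s after container start before the first probe
  timeoutSeconds: 5        # each probe must answer within 5s, or it counts as a failure
  periodSeconds: 10        # probe every 10s
  failureThreshold: 3      # 3 consecutive failures -> container marked not ready
  successThreshold: 1      # 1 success -> container marked ready again
```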
Use example
At present, I have a test image on Docker Hub: med1tator/helloweb:v1. After the container starts, it exposes a health check route /healthz/return200, which returns status code 200 when accessed, and a route /healthz/return404, which returns status code 404 when accessed.
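You can verify the two routes locally before deploying to the cluster. This is a sketch that assumes the container listens on port 80 (as the Pod specs below do) and that Docker is available:

```shell
# Run the test image locally, mapping container port 80 to local port 8080
docker run -d -p 8080:80 --name helloweb-test med1tator/helloweb:v1
curl -i http://localhost:8080/healthz/return200   # should return HTTP 200
curl -i http://localhost:8080/healthz/return404   # should return HTTP 404
docker rm -f helloweb-test                        # clean up
```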
readinessProbe example
Before the experiment, it helps to understand the load-balancing relationship between Pods and Services. In Kubernetes, a Service acts as the load balancer for Pods and matches Pods through a label selector. But that statement is incomplete, because there is another necessary condition: the Pod must currently be ready. In other words, a Service matches the currently ready Pods through its label selector. A Pod that matches the label selector but is not ready will not appear in the Service's endpoints, and requests will not be forwarded to it, as the following example shows.
Example description: we start three Pods from the med1tator/helloweb:v1 image, all with the same labels so that they match the Service. Two of the Pods use an httpGet readinessProbe against /healthz/return200 to simulate a successful probe; one uses an httpGet readinessProbe against /healthz/return404 to simulate a failing probe.
Our helloweb-readinessProbe.yaml file is as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helloweb1
  labels:
    app: helloweb
spec:
  containers:
  - name: helloweb
    image: med1tator/helloweb:v1
    readinessProbe:
      httpGet:
        path: /healthz/return200
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 10
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: helloweb2
  labels:
    app: helloweb
spec:
  containers:
  - name: helloweb
    image: med1tator/helloweb:v1
    readinessProbe:
      httpGet:
        path: /healthz/return200
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 10
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: helloweb3
  labels:
    app: helloweb
spec:
  containers:
  - name: helloweb
    image: med1tator/helloweb:v1
    readinessProbe:
      httpGet:
        path: /healthz/return404
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 10
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: helloweb
spec:
  selector:
    app: helloweb
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
```
Run the command to deploy the Pods and Service:
```shell
kubectl apply -f helloweb-readinessProbe.yaml
```
After that, let's check the readiness of the Pods:
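The check itself is a plain kubectl listing (the original output is not reproduced here):

```shell
# READY should show 1/1 for helloweb1 and helloweb2, and 0/1 for helloweb3
kubectl get pods
```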
You can see that only helloweb1 and helloweb2 are currently ready, while helloweb3 is not. Next, let's check the endpoints of the helloweb Service:
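The endpoints can be inspected with:

```shell
# Lists the Pod IP:port pairs the Service currently forwards to
kubectl get endpoints helloweb
```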
You can see that the Service's endpoints only contain the IP addresses of the helloweb1 and helloweb2 Pods.
View the access log:
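For example, tail the log of one of the Pods to watch the probe requests arrive:

```shell
# Stream the container log; each readiness probe shows up as an HTTP GET
kubectl logs helloweb1 --follow
```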
You can see that a readiness probe is performed every 10 seconds (readinessProbe.periodSeconds defaults to 10).
livenessProbe example
Our helloweb-livenessProbe.yaml file is as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helloweb4
  labels:
    app: helloweb
spec:
  containers:
  - name: helloweb
    image: med1tator/helloweb:v1
    livenessProbe:
      httpGet:
        path: /healthz/return200
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 10
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: helloweb5
  labels:
    app: helloweb
spec:
  containers:
  - name: helloweb
    image: med1tator/helloweb:v1
    livenessProbe:
      httpGet:
        path: /healthz/return404
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 10
    ports:
    - containerPort: 80
```
Run the command to deploy the Pods:
```shell
kubectl apply -f helloweb-livenessProbe.yaml
```
After that, let's check the status of the Pods:
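Again with a plain listing (output omitted here):

```shell
# The RESTARTS column counts how many times the kubelet has restarted each container
kubectl get pods
```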
You can see that the STATUS of helloweb4 is Running, while the STATUS of helloweb5 eventually changes to CrashLoopBackOff, and it keeps being restarted.
I believe that by now, you have a clear understanding of readinessProbe and livenessProbe.