Kubernetes tutorial: Pod lifecycle

Pod life cycle

  • Pod creation process
  • Running the init containers
  • Running the main containers
    • Hook after container startup (postStart), hook before container termination (preStop)
    • Container liveness probe (livenessProbe), readiness probe (readinessProbe)
  • Pod termination process

Throughout its lifecycle, a Pod passes through five states (phases), as follows:

  • Pending: the apiserver has created the pod resource object, but the pod has not yet been scheduled or is still downloading images
  • Running: the pod has been scheduled to a node, and all of its containers have been created by the kubelet
  • Succeeded: all containers in the pod have terminated successfully and will not be restarted
  • Failed: all containers have terminated, and at least one of them terminated in failure, i.e. returned a non-zero exit status
  • Unknown: the apiserver cannot obtain the pod object's status, usually because of a network communication failure
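
You can read a pod's current phase directly from its status field. A minimal example, assuming a pod named pod-demo exists in the dev namespace:

# Print only the phase field of the pod's status
kubectl get pod pod-demo -n dev -o jsonpath='{.status.phase}'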

Creation and termination

Pod creation process

  1. The user submits the pod definition to the apiServer through kubectl or another API client

  2. The apiServer generates the pod object, stores its information in etcd, and returns a confirmation to the client

  3. The apiServer reflects the changes to the pod object in etcd, and other components use the watch mechanism to track changes on the apiServer

  4. The scheduler notices that a new pod object needs to be created, assigns a node to the pod, and updates the result to the apiServer

  5. The kubelet on the target node notices that a pod has been scheduled to it, calls the container runtime (e.g. docker) to start the containers, and reports the result back to the apiServer

  6. The apiServer stores the received pod status information in etcd
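
These steps can be observed on a live cluster: watching the event stream in a second shell while a pod is created shows the scheduler and kubelet acting in turn. A minimal sketch, assuming pods are created in the dev namespace:

# Watch events in real time while creating a pod in another shell
kubectl get events -n dev --watch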

Pod termination process

  1. The user sends a command to the apiServer to delete the pod object
  2. The pod object in the apiServer is updated: within the grace period (30s by default), the pod is considered dead
  3. The pod is marked as terminating
  4. The kubelet starts the pod shutdown process as soon as it observes that the pod object has turned to the terminating state
  5. When the endpoint controller observes the pod's shutdown, the pod is removed from the endpoint lists of all service resources that match it
  6. If the pod object defines a preStop hook handler, it starts executing synchronously as soon as the pod is marked as terminating
  7. The container processes in the pod object receive the stop (TERM) signal
  8. When the grace period expires, any processes still running in the pod receive an immediate termination (KILL) signal
  9. The kubelet asks the apiServer to set the grace period of this pod resource to 0 to complete the deletion; at this point the pod is no longer visible to the user
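
The grace period itself is configurable. Below is a minimal sketch, using a hypothetical pod named pod-graceperiod, that extends the default 30s via spec.terminationGracePeriodSeconds; the grace period can also be overridden at deletion time, shown as a comment:

apiVersion: v1
kind: Pod
metadata:
  name: pod-graceperiod   # hypothetical name, for illustration only
  namespace: dev
spec:
  terminationGracePeriodSeconds: 60  # extend the grace period from the default 30s
  containers:
  - name: nginx
    image: nginx:1.17.1

# Override the grace period at deletion time:
#   kubectl delete pod pod-graceperiod -n dev --grace-period=10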

Init containers

An init container runs before the pod's main containers start, mainly performing preparatory work for them. Init containers have two characteristics:

  1. An init container must run to completion. If an init container fails, kubernetes restarts it repeatedly until it finishes successfully
  2. Init containers are executed in the order they are defined; each one can run only after the previous one has succeeded

Init containers have many application scenarios; the most common are:

  • Providing utilities or custom code that are not present in the main container image
  • Since init containers start and run serially before the application containers, they can be used to delay the start of the application containers until the conditions they depend on are met

Next, let's build a case simulating the following requirement:

Suppose we want to run nginx as the main container, but nginx must not start until the servers hosting mysql and redis are reachable.

To simplify the test, the addresses of the mysql (192.168.90.14) and redis (192.168.90.15) servers are fixed in advance.

# Create pod-initcontainer.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: pod-initcontainer
  namespace: dev
spec:
  containers:
  - name: main-container
    image: nginx:1.17.1
    ports: 
    - name: nginx-port
      containerPort: 80
  # Init containers
  initContainers:
  - name: test-mysql
    image: busybox:1.30
    # custom command
    command: ['sh', '-c', 'until ping -c 1 192.168.90.14; do echo waiting for mysql...; sleep 2; done;']
  - name: test-redis
    image: busybox:1.30
    command: ['sh', '-c', 'until ping -c 1 192.168.90.15; do echo waiting for redis...; sleep 2; done;']

# Create the pod
[root@k8s-master01 ~]# kubectl create -f pod-initcontainer.yaml
pod/pod-initcontainer created

# View the pod status
# You will find the pod stuck starting the first init container; the subsequent containers will not run
[root@k8s-master01 ~]# kubectl describe pod pod-initcontainer -n dev
........
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  49s   default-scheduler  Successfully assigned dev/pod-initcontainer to node1
  Normal  Pulled     48s   kubelet, node1     Container image "busybox:1.30" already present on machine
  Normal  Created    48s   kubelet, node1     Created container test-mysql
  Normal  Started    48s   kubelet, node1     Started container test-mysql

# Watch the pod dynamically
[root@k8s-master01 ~]# kubectl get pods pod-initcontainer -n dev -w
NAME                             READY   STATUS     RESTARTS   AGE
pod-initcontainer                0/1     Init:0/2   0          15s
pod-initcontainer                0/1     Init:1/2   0          52s
pod-initcontainer                0/1     Init:1/2   0          53s
pod-initcontainer                0/1     PodInitializing   0          89s
pod-initcontainer                1/1     Running           0          90s

# Next, open a new shell and add the two IPs (mysql and redis) to the current server to observe the changes in the pod
[root@k8s-master01 ~]# ifconfig ens33:1 192.168.90.14 netmask 255.255.255.0 up
[root@k8s-master01 ~]# ifconfig ens33:2 192.168.90.15 netmask 255.255.255.0 up

# Check the pod status again: the containers now run successfully

Hook functions

A hook function lets the container perceive events in its own lifecycle and run user-specified code when the corresponding moment arrives.

kubernetes provides two hook functions, one after the main container starts and one before it stops:

  • postStart: executed after the container is created; if it fails, the container is restarted
  • preStop: executed before the container terminates; the container does not terminate until the hook completes, and the delete operation is blocked until it does

The hook handler supports the following three ways to define actions:

  • Exec command: execute a command in the container

    ......
      # life cycle
      lifecycle:
        # hook
        postStart: 
          # exec hook handler
          exec:
            command:
            - cat
            - /tmp/healthy
    ......
    
  • TCPSocket: Try to access the specified socket in the current container

    ......      
      lifecycle:
        postStart:
          tcpSocket:
            port: 8080
    ......
    
  • HTTPGet: initiate an HTTP request to a URL in the current container

    ......
      lifecycle:
        postStart:
          httpGet:
            # Equivalent to requesting http://192.168.5.3:80/
            path: / #URI path
            port: 80 #port number
            host: 192.168.5.3 #host address
            scheme: HTTP #protocol, HTTP or HTTPS
    ......
    

Next, take the exec method as an example to demonstrate the use of the hook function.

# Create a pod-hook-exec.yaml file with the following content
apiVersion: v1
kind: Pod
metadata:
  name: pod-hook-exec
  namespace: dev
spec:
  containers:
  - name: main-container
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    lifecycle:
      postStart: 
        exec: # Execute a command when the container starts to modify the default home page content of nginx
          command: ["/bin/sh", "-c", "echo postStart... > /usr/share/nginx/html/index.html"]
      preStop:
        exec: # Stop nginx service before container stops
          command: ["/usr/sbin/nginx","-s","quit"]
          
          
# Create the pod
[root@k8s-master01 ~]# kubectl create -f pod-hook-exec.yaml
pod/pod-hook-exec created

# View the pod
[root@k8s-master01 ~]# kubectl get pods  pod-hook-exec -n dev -o wide
NAME           READY   STATUS     RESTARTS   AGE    IP            NODE    
pod-hook-exec  1/1     Running    0          29s    10.244.2.48   node2   

# Access the pod
[root@k8s-master01 ~]# curl 10.244.2.48
postStart...

Container detection

Container detection is used to check whether the application instance inside a container is working normally; it is a traditional mechanism for ensuring business availability. If the detected state of an instance does not meet expectations, kubernetes "removes" the problem instance so that it no longer carries business traffic. kubernetes provides two probes to implement container detection:

  • Liveness probes: detect whether the application instance is currently running normally; if not, k8s restarts the container
  • Readiness probes: detect whether the application instance can currently receive requests; if not, k8s does not forward traffic to it

livenessProbe decides whether to restart the container, and readinessProbe decides whether to forward requests to the container.
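
The demonstrations below cover only the liveness probe, but a readiness probe is declared in exactly the same way. A minimal sketch for contrast, using a hypothetical pod name and combining both probes:

apiVersion: v1
kind: Pod
metadata:
  name: pod-readiness   # hypothetical name, for illustration only
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:      # failing this probe restarts the container
      tcpSocket:
        port: 80
    readinessProbe:     # failing this probe only stops traffic from being forwarded
      httpGet:
        scheme: HTTP
        port: 80
        path: /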

The above two probes currently support three detection methods:

  • Exec command: Execute a command in the container. If the exit code of command execution is 0, the program is considered normal, otherwise it is abnormal

    ......
      livenessProbe:
        exec:
          command:
          - cat
          - /tmp/healthy
    ......
    
  • TCPSocket: tries to connect to the specified port of the container. If the connection can be established, the program is considered normal, otherwise it is abnormal

    ......      
      livenessProbe:
        tcpSocket:
          port: 8080
    ......
    
  • HTTPGet: Call the URL of the Web application in the container. If the returned status code is between 200 and 399, the program is considered normal, otherwise it is abnormal

    ......
      livenessProbe:
        httpGet:
          path: / #URI path
          port: 80 #port number
          host: 127.0.0.1 #host address
          scheme: HTTP #protocol, HTTP or HTTPS
    ......
    

Let's take the liveness probe as an example and run a few demonstrations:

Method 1: Exec

Create pod-liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports: 
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      exec:
        command: ["/bin/cat","/tmp/hello.txt"] # Execute a command to view a file

Create a pod and observe the effect

# Create the pod
[root@k8s-master01 ~]# kubectl create -f pod-liveness-exec.yaml
pod/pod-liveness-exec created

# View Pod Details
[root@k8s-master01 ~]# kubectl describe pods pod-liveness-exec -n dev
......
  Normal   Created    20s (x2 over 50s)  kubelet, node1     Created container nginx
  Normal   Started    20s (x2 over 50s)  kubelet, node1     Started container nginx
  Normal   Killing    20s                kubelet, node1     Container nginx failed liveness probe, will be restarted
  Warning  Unhealthy  0s (x5 over 40s)   kubelet, node1     Liveness probe failed: cat: can't open '/tmp/hello.txt': No such file or directory
  
# From the information above, you can see that a health check is performed after the nginx container starts
# When the check fails, the container is killed and then restarted (this is the restart policy at work, explained later)
# After waiting a while and observing the pod again, you can see that RESTARTS is no longer 0 but keeps increasing
[root@k8s-master01 ~]# kubectl get pods pod-liveness-exec -n dev
NAME                READY   STATUS             RESTARTS   AGE
pod-liveness-exec   0/1     CrashLoopBackOff   2          3m19s

# Of course, you can then change it to a command that succeeds, such as "ls /tmp", and try again; the result will be normal...

Method 2: TCPSocket

Create pod-liveness-tcpsocket.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-tcpsocket
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports: 
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 8080 # Try to access port 8080

Create a pod and observe the effect

# Create the pod
[root@k8s-master01 ~]# kubectl create -f pod-liveness-tcpsocket.yaml
pod/pod-liveness-tcpsocket created

# View Pod Details
[root@k8s-master01 ~]# kubectl describe pods pod-liveness-tcpsocket -n dev
......
  Normal   Scheduled  31s                            default-scheduler  Successfully assigned dev/pod-liveness-tcpsocket to node2
  Normal   Pulled     <invalid>                      kubelet, node2     Container image "nginx:1.17.1" already present on machine
  Normal   Created    <invalid>                      kubelet, node2     Created container nginx
  Normal   Started    <invalid>                      kubelet, node2     Started container nginx
  Warning  Unhealthy  <invalid> (x2 over <invalid>)  kubelet, node2     Liveness probe failed: dial tcp 10.244.2.44:8080: connect: connection refused
  
# From the information above, you can see that the probe tried to access port 8080 but failed
# After waiting a while and observing the pod again, you can see that RESTARTS is no longer 0 but keeps increasing
[root@k8s-master01 ~]# kubectl get pods pod-liveness-tcpsocket  -n dev
NAME                     READY   STATUS             RESTARTS   AGE
pod-liveness-tcpsocket   0/1     CrashLoopBackOff   2          3m19s

# Of course, you can then change it to an accessible port, such as 80, and try again; the result will be normal...

Method 3: HTTPGet

Create pod-liveness-httpget.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:  # Equivalent to accessing http://127.0.0.1:80/hello
        scheme: HTTP #protocol, HTTP or HTTPS
        port: 80 #port number
        path: /hello #URI path

Create a pod and observe the effect

# Create the pod
[root@k8s-master01 ~]# kubectl create -f pod-liveness-httpget.yaml
pod/pod-liveness-httpget created

# View Pod Details
[root@k8s-master01 ~]# kubectl describe pod pod-liveness-httpget -n dev
.......
  Normal   Pulled     6s (x3 over 64s)  kubelet, node1     Container image "nginx:1.17.1" already present on machine
  Normal   Created    6s (x3 over 64s)  kubelet, node1     Created container nginx
  Normal   Started    6s (x3 over 63s)  kubelet, node1     Started container nginx
  Warning  Unhealthy  6s (x6 over 56s)  kubelet, node1     Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    6s (x2 over 36s)  kubelet, node1     Container nginx failed liveness probe, will be restarted
  
# From the information above, you can see that the probe accessed the path but it was not found, producing a 404 error
# After waiting a while and observing the pod again, you can see that RESTARTS is no longer 0 but keeps increasing
[root@k8s-master01 ~]# kubectl get pod pod-liveness-httpget -n dev
NAME                   READY   STATUS    RESTARTS   AGE
pod-liveness-httpget   1/1     Running   5          3m17s

# Of course, you can then change it to an accessible path, such as /, and try again; the result will be normal...

So far, the three detection methods of the liveness probe have been demonstrated. Looking at the sub-properties of livenessProbe, however, you will find that in addition to these three methods there are some other configuration options, explained here:

[root@k8s-master01 ~]# kubectl explain pod.spec.containers.livenessProbe
FIELDS:
   exec <Object>  
   tcpSocket    <Object>
   httpGet      <Object>
   initialDelaySeconds  <integer>  # Seconds to wait after the container starts before the first probe
   timeoutSeconds       <integer>  # Probe timeout. Default is 1 second, minimum is 1 second
   periodSeconds        <integer>  # How often to perform the probe. Default is 10 seconds, minimum is 1 second
   failureThreshold     <integer>  # How many consecutive failures before the probe is considered failed. Default is 3, minimum is 1
   successThreshold     <integer>  # How many consecutive successes before the probe is considered successful. Default is 1

Let's configure two of these options to demonstrate the effect:

[root@k8s-master01 ~]# more pod-liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80 
        path: /
      initialDelaySeconds: 30 # wait 30s after the container starts before probing
      timeoutSeconds: 5 # probe timeout of 5s
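
The remaining fields can be combined in the same way. A sketch with values chosen purely for illustration:

......
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /
      initialDelaySeconds: 30
      periodSeconds: 5        # probe every 5s instead of the default 10s
      failureThreshold: 5     # restart only after 5 consecutive failures
......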

Restart policy

In the previous section, we saw that once a container probe detects a problem, kubernetes restarts the affected container. This behavior is determined by the pod's restart policy, which has three possible values:

  • Always: automatically restart the container whenever it terminates, regardless of the exit code. This is also the default value
  • OnFailure: restart the container only when it terminates with a non-zero exit code
  • Never: never restart the container, regardless of its state

The restart policy applies to all containers in the pod object. The first restart of a failed container happens immediately; subsequent restarts are delayed by the kubelet for increasing intervals of 10s, 20s, 40s, 80s, 160s, and 300s, with 300s as the maximum delay.

Create pod-restartpolicy.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello
  restartPolicy: Never # Set restart policy to Never

Run the pod to test:

# Create the pod
[root@k8s-master01 ~]# kubectl create -f pod-restartpolicy.yaml
pod/pod-restartpolicy created

# Check the pod details and find that the nginx container's liveness probe fails
[root@k8s-master01 ~]# kubectl  describe pods pod-restartpolicy  -n dev
......
  Warning  Unhealthy  15s (x3 over 35s)  kubelet, node1     Liveness probe failed: HTTP probe failed with statuscode: 404
  
  Normal   Killing    15s                kubelet, node1     Container nginx failed liveness probe
  
# Wait a while and then observe the number of pod restarts; it stays at 0, i.e. the container is never restarted
[root@k8s-master01 ~]# kubectl  get pods pod-restartpolicy -n dev
NAME                   READY   STATUS    RESTARTS   AGE
pod-restartpolicy      0/1     Running   0          5m42s
