k8s: detailed explanation of pod controller and service

1. Detailed explanation of pod controller

1.1 introduction to pod controller

A Pod is the smallest management unit in kubernetes. Pods can be divided into two categories according to how they are created:

  • Autonomous Pod: a pod created directly, without a controller. After deletion it is gone and will not be rebuilt
  • Controller-created Pod: a pod created by kubernetes through a controller. After deletion it is rebuilt automatically

What is a Pod controller

The pod controller is the middle layer for managing pods. With a pod controller you only need to declare how many pods of what kind you want; the controller then creates pods that meet those requirements and keeps every pod in the target state expected by the user. If a pod fails while running, the controller recreates or reschedules it according to the specified policy.

In kubernetes, there are many types of pod controllers, each of which has its own suitable scenario. The common ones are as follows:

  • ReplicationController: the original pod controller; it has been deprecated and replaced by ReplicaSet
  • ReplicaSet: ensures that the number of replicas always stays at the expected value, and supports scaling the number of pods and upgrading the image version
  • Deployment: controls Pods by controlling ReplicaSets, and additionally supports rolling upgrades and version rollback
  • Horizontal Pod Autoscaler: automatically adjusts the number of pods according to cluster load, scaling up at peaks and down at valleys
  • DaemonSet: runs exactly one replica on every (or every specified) Node in the cluster; generally used for daemon-type tasks
  • Job: the pods it creates exit as soon as the task completes and are not restarted or rebuilt; used for one-off tasks
  • CronJob: the pods it creates run periodic tasks and do not need to run continuously in the background
  • StatefulSet: manages stateful applications

1.2 ReplicaSet(RS)

The main function of ReplicaSet is to keep a certain number of pods running normally. It continuously monitors the status of these pods and restarts or rebuilds a pod as soon as it fails. It also supports scaling the number of pods and upgrading the image version.

Resource manifest file for ReplicaSet:

apiVersion: apps/v1 # Version number
kind: ReplicaSet # type       
metadata: # metadata
  name: # rs name 
  namespace: # Namespace 
  labels: #label
    controller: rs
spec: # Detailed description
  replicas: 3 # Number of copies
  selector: # Selector, which specifies which pods the controller manages
    matchLabels:      # Labels matching rule
      app: nginx-pod
    matchExpressions: # Expressions matching rule
      - {key: app, operator: In, values: [nginx-pod]}
  template: # Template. When the number of copies is insufficient, a pod copy will be created according to the following template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

The configuration items that need to be newly understood here are the following options under spec:

  • replicas: specifies the number of replicas, i.e. the number of pods the current rs creates; defaults to 1
  • selector: the selector establishes the association between the pod controller and its pods via the Label Selector mechanism. Labels are defined on the pod template and a selector on the controller to indicate which pods the current controller manages
  • template: the template the current controller uses to create pods; it is exactly the pod definition learned in the previous chapter

Create a ReplicaSet
Create a pc-replicaset.yaml file with the following content:

apiVersion: apps/v1
kind: ReplicaSet   
metadata:
  name: pc-replicaset
  namespace: dev
spec:
  replicas: 3
  selector: 
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# Create rs
[root@master ~]# kubectl create -f pc-replicaset.yaml
replicaset.apps/pc-replicaset created

# View rs
# DESIRED: expected number of replicas
# CURRENT: current number of replicas
# READY: number of replicas ready to serve
[root@master ~]# kubectl get rs pc-replicaset -n dev -o wide
NAME          DESIRED   CURRENT READY AGE   CONTAINERS   IMAGES             SELECTOR
pc-replicaset 3         3       3     22s   nginx        nginx:1.17.1       app=nginx-pod

# View the pod created by the current controller
# The name of a pod created by the controller is the controller name followed by a -xxxxx random suffix
[root@master ~]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
pc-replicaset-6vmvt   1/1     Running   0          54s
pc-replicaset-fmb8f   1/1     Running   0          54s
pc-replicaset-snrk2   1/1     Running   0          54s

Scaling up and down

# Edit the rs and change spec.replicas to 6
[root@master ~]# kubectl edit rs pc-replicaset -n dev
replicaset.apps/pc-replicaset edited

# View pod
[root@master ~]# kubectl get pods -n dev
NAME                          READY   STATUS    RESTARTS   AGE
pc-replicaset-6vmvt   1/1     Running   0          114m
pc-replicaset-cftnp   1/1     Running   0          10s
pc-replicaset-fjlm6   1/1     Running   0          10s
pc-replicaset-fmb8f   1/1     Running   0          114m
pc-replicaset-s2whj   1/1     Running   0          10s
pc-replicaset-snrk2   1/1     Running   0          114m

# Of course, you can also use a command directly
# Use the scale command to scale up or down, with --replicas=n specifying the target number
[root@master ~]# kubectl scale rs pc-replicaset --replicas=2 -n dev
replicaset.apps/pc-replicaset scaled

# Check immediately after the command runs: 4 pods are terminating
[root@master ~]# kubectl get pods -n dev
NAME                       READY   STATUS        RESTARTS   AGE
pc-replicaset-6vmvt   0/1     Terminating   0          118m
pc-replicaset-cftnp   0/1     Terminating   0          4m17s
pc-replicaset-fjlm6   0/1     Terminating   0          4m17s
pc-replicaset-fmb8f   1/1     Running       0          118m
pc-replicaset-s2whj   0/1     Terminating   0          4m17s
pc-replicaset-snrk2   1/1     Running       0          118m

# Wait a moment; only two pods are left
[root@master ~]# kubectl get pods -n dev
NAME                       READY   STATUS    RESTARTS   AGE
pc-replicaset-fmb8f   1/1     Running   0          119m
pc-replicaset-snrk2   1/1     Running   0          119m

Image upgrade

# Edit the rs and change the container image to nginx:1.17.2
[root@master ~]# kubectl edit rs pc-replicaset -n dev
replicaset.apps/pc-replicaset edited

# Check again and find that the image version has changed
[root@master ~]# kubectl get rs -n dev -o wide
NAME                DESIRED  CURRENT   READY   AGE    CONTAINERS   IMAGES        ...
pc-replicaset       2        2         2       140m   nginx         nginx:1.17.2  ...

# The same can be done with a command
# kubectl set image rs <rs-name> <container>=<image>:<version> -n <namespace>
[root@master ~]# kubectl set image rs pc-replicaset nginx=nginx:1.17.1  -n dev
replicaset.apps/pc-replicaset image updated

# Check again and find that the image version has changed
[root@master ~]# kubectl get rs -n dev -o wide
NAME                 DESIRED  CURRENT   READY   AGE    CONTAINERS   IMAGES            ...
pc-replicaset        2        2         2       145m   nginx        nginx:1.17.1 ... 

Delete ReplicaSet

# Using the kubectl delete command deletes this RS and the pods it manages
# Before deleting the RS, kubernetes adjusts the replicas of the RS to 0; once all pods are deleted, the RS object itself is deleted
[root@master ~]# kubectl delete rs pc-replicaset -n dev
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get pod -n dev -o wide
No resources found in dev namespace.

# If you only want to delete the RS object (keeping the pods), add the --cascade=false option to the kubectl delete command (not recommended)
[root@master ~]# kubectl delete rs pc-replicaset -n dev --cascade=false
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get pods -n dev
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-cl82j   1/1     Running   0          75s
pc-replicaset-dslhb   1/1     Running   0          75s

# You can also use yaml to delete directly (recommended)
[root@master ~]# kubectl delete -f pc-replicaset.yaml
replicaset.apps "pc-replicaset" deleted

1.3 Deployment(Deploy)

In order to better solve the problem of service orchestration, kubernetes introduced the Deployment controller in version 1.2. It is worth mentioning that this controller does not manage pods directly; instead it manages pods by managing ReplicaSets, i.e. Deployment manages ReplicaSet and ReplicaSet manages pods. A Deployment is therefore more powerful than a ReplicaSet.

The main functions of Deployment are as follows:

  • Supports all the features of ReplicaSet
  • Supports pausing and resuming a rollout
  • Supports rolling upgrades and version rollback

Resource manifest file of Deployment:

apiVersion: apps/v1 # Version number
kind: Deployment # type       
metadata: # metadata
  name: # deploy name
  namespace: # Namespace 
  labels: #label
    controller: deploy
spec: # Detailed description
  replicas: 3 # Number of copies
  revisionHistoryLimit: 3 # Number of old versions (ReplicaSets) to keep for rollback
  paused: false # Pause deployment. The default is false
  progressDeadlineSeconds: 600 # Deployment timeout (s), the default is 600
  strategy: # strategy
    type: RollingUpdate # Rolling update strategy
    rollingUpdate: # Rolling update
      maxSurge: 30% # The maximum number of pods that can exist above the desired replica count, as a percentage or an integer
      maxUnavailable: 30% # The maximum number of pods that can be unavailable during the update, as a percentage or an integer
  selector: # Selector, which specifies which pods the controller manages
    matchLabels:      # Labels matching rule
      app: nginx-pod
    matchExpressions: # Expressions matching rule
      - {key: app, operator: In, values: [nginx-pod]}
  template: # Template. When the number of copies is insufficient, a pod copy will be created according to the following template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Create deployment

Create pc-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment      
metadata:
  name: pc-deployment
  namespace: dev
spec: 
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# Create deployment
[root@master ~]# kubectl create -f pc-deployment.yaml --record=true
deployment.apps/pc-deployment created

# View deployment
# UP-TO-DATE: number of pods of the latest version
# AVAILABLE: number of currently available pods
[root@master ~]# kubectl get deploy pc-deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   3/3     3            3           15s

# View rs
# The rs name is the original deployment name followed by a 10-character random hash
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-6696798b78   3         3         3       23s

# View pod
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-6696798b78-d2c8n   1/1     Running   0          107s
pc-deployment-6696798b78-smpvp   1/1     Running   0          107s
pc-deployment-6696798b78-wvjd8   1/1     Running   0          107s

Scaling up and down

# Change the number of replicas to 5
[root@master ~]# kubectl scale deploy pc-deployment --replicas=5  -n dev
deployment.apps/pc-deployment scaled

# View deployment
[root@master ~]# kubectl get deploy pc-deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   5/5     5            5           2m

# View pod
[root@master ~]#  kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-6696798b78-d2c8n   1/1     Running   0          4m19s
pc-deployment-6696798b78-jxmdq   1/1     Running   0          94s
pc-deployment-6696798b78-mktqv   1/1     Running   0          93s
pc-deployment-6696798b78-smpvp   1/1     Running   0          4m19s
pc-deployment-6696798b78-wvjd8   1/1     Running   0          4m19s

# Edit the deployment and change spec.replicas to 4
[root@master ~]# kubectl edit deploy pc-deployment -n dev
deployment.apps/pc-deployment edited

# View pod
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-6696798b78-d2c8n   1/1     Running   0          5m23s
pc-deployment-6696798b78-jxmdq   1/1     Running   0          2m38s
pc-deployment-6696798b78-smpvp   1/1     Running   0          5m23s
pc-deployment-6696798b78-wvjd8   1/1     Running   0          5m23s

Image update
deployment supports two update strategies: recreate and rolling update. The strategy is specified through the strategy option, which supports two attributes, described below:

strategy: specifies the strategy for replacing old Pods with new ones. It supports two attributes:
  type: specifies the strategy type. Two strategies are supported:
    Recreate: all existing Pods are killed before new Pods are created
    RollingUpdate: kill a portion and start a portion at the same time; during the update two versions of Pods exist
  rollingUpdate: takes effect only when type is RollingUpdate and sets its parameters. It supports two attributes:
    maxUnavailable: the maximum number of Pods that may be unavailable during the upgrade; defaults to 25%.
    maxSurge: the maximum number of Pods that may exist above the desired count during the upgrade; defaults to 25%.

Recreate update

  1. Edit pc-deployment.yaml and add the update strategy under the spec node
spec:
  strategy: # strategy
    type: Recreate # Recreate strategy
  2. Re-create the deployment and verify
# Change image
[root@master ~]# kubectl set image deployment pc-deployment nginx=nginx:1.17.2 -n dev
deployment.apps/pc-deployment image updated

# Observe the upgrade process
[root@master ~]#  kubectl get pods -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-65qcw   1/1     Running   0          31s
pc-deployment-5d89bdfbf9-w5nzv   1/1     Running   0          31s
pc-deployment-5d89bdfbf9-xpt7w   1/1     Running   0          31s

pc-deployment-5d89bdfbf9-xpt7w   1/1     Terminating   0          41s
pc-deployment-5d89bdfbf9-65qcw   1/1     Terminating   0          41s
pc-deployment-5d89bdfbf9-w5nzv   1/1     Terminating   0          41s

pc-deployment-675d469f8b-grn8z   0/1     Pending       0          0s
pc-deployment-675d469f8b-hbl4v   0/1     Pending       0          0s
pc-deployment-675d469f8b-67nz2   0/1     Pending       0          0s

pc-deployment-675d469f8b-grn8z   0/1     ContainerCreating   0          0s
pc-deployment-675d469f8b-hbl4v   0/1     ContainerCreating   0          0s
pc-deployment-675d469f8b-67nz2   0/1     ContainerCreating   0          0s

pc-deployment-675d469f8b-grn8z   1/1     Running             0          1s
pc-deployment-675d469f8b-67nz2   1/1     Running             0          1s
pc-deployment-675d469f8b-hbl4v   1/1     Running             0          2s

Rolling update

  1. Edit pc-deployment.yaml and add the update strategy under the spec node
spec:
  strategy: # strategy
    type: RollingUpdate # Rolling update strategy
    rollingUpdate:
      maxSurge: 25% 
      maxUnavailable: 25%
  2. Re-create the deployment and verify
# Change image
[root@master ~]# kubectl set image deployment pc-deployment nginx=nginx:1.17.3 -n dev
deployment.apps/pc-deployment image updated

# Observe the upgrade process
[root@master ~]# kubectl get pods -n dev -w
NAME                           READY   STATUS    RESTARTS   AGE
pc-deployment-c848d767-8rbzt   1/1     Running   0          31m
pc-deployment-c848d767-h4p68   1/1     Running   0          31m
pc-deployment-c848d767-hlmz4   1/1     Running   0          31m
pc-deployment-c848d767-rrqcn   1/1     Running   0          31m

pc-deployment-966bf7f44-226rx   0/1     Pending             0          0s
pc-deployment-966bf7f44-226rx   0/1     ContainerCreating   0          0s
pc-deployment-966bf7f44-226rx   1/1     Running             0          1s
pc-deployment-c848d767-h4p68    0/1     Terminating         0          34m

pc-deployment-966bf7f44-cnd44   0/1     Pending             0          0s
pc-deployment-966bf7f44-cnd44   0/1     ContainerCreating   0          0s
pc-deployment-966bf7f44-cnd44   1/1     Running             0          2s
pc-deployment-c848d767-hlmz4    0/1     Terminating         0          34m

pc-deployment-966bf7f44-px48p   0/1     Pending             0          0s
pc-deployment-966bf7f44-px48p   0/1     ContainerCreating   0          0s
pc-deployment-966bf7f44-px48p   1/1     Running             0          0s
pc-deployment-c848d767-8rbzt    0/1     Terminating         0          34m

pc-deployment-966bf7f44-dkmqp   0/1     Pending             0          0s
pc-deployment-966bf7f44-dkmqp   0/1     ContainerCreating   0          0s
pc-deployment-966bf7f44-dkmqp   1/1     Running             0          2s
pc-deployment-c848d767-rrqcn    0/1     Terminating         0          34m

# By now, all pods of the new version have been created and the pods of the old version have been destroyed
# The intermediate process is rolling: new pods are created while old ones are destroyed

Rolling update process:

Changes to the rs during an image update:

# Looking at the rs, the original rs still exists but its pod count has dropped to 0, and a new rs has been generated whose pod count is 4
# This is in fact the secret behind version rollback in deployment, which is explained in detail later
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-6696798b78   0         0         0       7m37s
pc-deployment-6696798b11   0         0         0       5m37s
pc-deployment-c848d76789   4         4         4       72s

Version rollback

deployment supports pausing, resuming and rolling back versions during an upgrade, as shown below.
kubectl rollout: version-upgrade related command. It supports the following options:

  • status: displays the current upgrade status
  • history: displays the upgrade history
  • pause: pauses the upgrade process
  • resume: resumes a paused upgrade process
  • restart: restarts the upgrade process
  • undo: rolls back to the previous version (use --to-revision to roll back to a specific version)
# View the status of the current upgraded version
[root@master ~]# kubectl rollout status deploy pc-deployment -n dev
deployment "pc-deployment" successfully rolled out

# View upgrade history
[root@master ~]# kubectl rollout history deploy pc-deployment -n dev
deployment.apps/pc-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=pc-deployment.yaml --record=true
2         kubectl create --filename=pc-deployment.yaml --record=true
3         kubectl create --filename=pc-deployment.yaml --record=true
# It can be found that there are three version records, indicating that two upgrades have been completed

# Version rollback
# Here --to-revision=1 is used to roll back directly to version 1. If the option is omitted, it rolls back to the previous version, i.e. version 2
[root@master ~]# kubectl rollout undo deployment pc-deployment --to-revision=1 -n dev
deployment.apps/pc-deployment rolled back

# The first version can be found through nginx image version
[root@master ~]# kubectl get deploy -n dev -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         
pc-deployment   4/4     4            4           74m   nginx        nginx:1.17.1   

# Looking at the rs, the first rs now runs 4 pods, while the rs of the later two versions run 0 pods
# In fact, the reason deployment can roll back versions is that it keeps the historical rs around
# To roll back, the pod count of the current version's rs is reduced to 0 and the pod count of the target version's rs is scaled back up
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-6696798b78   4         4         4       78m
pc-deployment-966bf7f44    0         0         0       37m
pc-deployment-c848d767     0         0         0       71m

Canary release
The Deployment controller supports controlling the update process, for example pausing or resuming an update.
For example, immediately after a first batch of new Pod resources is created, the update is paused. At this point only part of the application runs the new version while the majority still runs the old version. A small portion of user requests is then routed to the new-version Pods to observe whether they run stably as expected. If everything looks fine, the rolling update of the remaining Pod resources continues; otherwise the update is rolled back immediately. This is the so-called canary release.

# Update the deployment image and immediately pause the rollout
[root@master ~]#  kubectl set image deploy pc-deployment nginx=nginx:1.17.4 -n dev && kubectl rollout pause deployment pc-deployment  -n dev
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment paused

#Observe update status
[root@master ~]# kubectl rollout status deploy pc-deployment -n dev 
Waiting for deployment "pc-deployment" rollout to finish: 2 out of 4 new replicas have been updated...

# Monitoring the update process: a batch of new-version pods has been added, but the old pods have not all been replaced as they normally would be, because the rollout is paused

[root@master ~]# kubectl get rs -n dev -o wide
NAME                       DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         
pc-deployment-5d89bdfbf9   3         3         3       19m     nginx        nginx:1.17.1   
pc-deployment-675d469f8b   0         0         0       14m     nginx        nginx:1.17.2   
pc-deployment-6c9f56fcfb   2         2         2       3m16s   nginx        nginx:1.17.4   
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-rj8sq   1/1     Running   0          7m33s
pc-deployment-5d89bdfbf9-ttwgg   1/1     Running   0          7m35s
pc-deployment-5d89bdfbf9-v4wvc   1/1     Running   0          7m34s
pc-deployment-6c9f56fcfb-996rt   1/1     Running   0          3m31s
pc-deployment-6c9f56fcfb-j2gtj   1/1     Running   0          3m31s

# Once the updated pods are confirmed OK, resume the update
[root@master ~]# kubectl rollout resume deploy pc-deployment -n dev
deployment.apps/pc-deployment resumed

# View the last update
[root@master ~]# kubectl get rs -n dev -o wide
NAME                       DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         
pc-deployment-5d89bdfbf9   0         0         0       21m     nginx        nginx:1.17.1   
pc-deployment-675d469f8b   0         0         0       16m     nginx        nginx:1.17.2   
pc-deployment-6c9f56fcfb   4         4         4       5m11s   nginx        nginx:1.17.4   

[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-6c9f56fcfb-7bfwh   1/1     Running   0          37s
pc-deployment-6c9f56fcfb-996rt   1/1     Running   0          5m27s
pc-deployment-6c9f56fcfb-j2gtj   1/1     Running   0          5m27s
pc-deployment-6c9f56fcfb-rf84v   1/1     Running   0          37s

Delete Deployment

# Delete the deployment, and the rs and pod under it will also be deleted
[root@master ~]# kubectl delete -f pc-deployment.yaml
deployment.apps "pc-deployment" deleted

1.4 Horizontal Pod Autoscaler(HPA)

Previously, we could manually run the kubectl scale command to scale pods up or down, but this obviously does not match kubernetes' goal of automation and intelligence. Kubernetes expects to adjust the number of pods automatically by monitoring pod usage, which is what the Horizontal Pod Autoscaler (HPA) controller is for.
HPA obtains the utilization of each Pod, compares it with the metrics defined in the HPA, calculates the specific number of replicas needed, and finally adjusts the pod count. Like Deployment, HPA is itself a kubernetes resource object: it tracks and analyzes the load of all target pods managed by the target controller and decides whether the target replica count needs to be adjusted. This is the working principle of HPA.

Next, let's do an experiment

1. Install metrics server

Metrics server can be used to collect resource usage in a cluster

# Install git
[root@master ~]# yum install git -y
# Get the metrics server and pay attention to the version used
[root@master ~]# git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server
# When modifying the deployment file, note that the image and the startup arguments need to be changed
[root@master ~]# cd /root/metrics-server/deploy/1.8+/
[root@master 1.8+]# vim metrics-server-deployment.yaml
 Add the following options to the metrics-server container section:
hostNetwork: true
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
# Install metrics server
[root@master 1.8+]# kubectl apply -f ./

# Check the operation of pod
[root@master 1.8+]# kubectl get pod -n kube-system
metrics-server-6b976979db-2xwbj   1/1     Running   0          90s

# Use kubectl top node to view resource usage
[root@master 1.8+]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   98m          4%     1067Mi          62%
node1    27m          1%     727Mi           42%
node2    34m          1%     800Mi           46%
[root@master 1.8+]# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
coredns-6955765f44-7ptsb          3m           9Mi
coredns-6955765f44-vcwr5          3m           8Mi
etcd-master                       14m          145Mi
...
# At this point, the metrics server installation is complete

2. Prepare a deployment and a service

For convenience, create them directly with commands

# Create deployment 
[root@master 1.8+]# kubectl run nginx --image=nginx:latest --requests=cpu=100m -n dev
# Create service
[root@master 1.8+]# kubectl expose deployment nginx --type=NodePort --port=80 -n dev

# View
[root@master 1.8+]# kubectl get deployment,pod,svc -n dev
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           47s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7df9756ccc-bh8dr   1/1     Running   0          47s

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx   NodePort   10.109.57.248   <none>        80:31136/TCP   35s
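Note: on newer kubectl versions, kubectl run creates only a bare Pod rather than a Deployment, so the same setup may need to be written declaratively. A rough equivalent sketch (the run=nginx labels mimic what kubectl run applied here; adjust as needed):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: 100m   # a CPU request is required so the HPA has something to compare utilization against
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: dev
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80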

3. Deploy the HPA

Create pc-hpa.yaml with the following content:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1  # Minimum number of pods
  maxReplicas: 10 # Maximum number of pods
  targetCPUUtilizationPercentage: 3 # Target CPU utilization percentage
  scaleTargetRef:   # The target resource to scale (the nginx deployment created above)
    apiVersion: apps/v1
    kind: Deployment  
    name: nginx  
# Create hpa
[root@master 1.8+]# kubectl create -f pc-hpa.yaml
horizontalpodautoscaler.autoscaling/pc-hpa created

# View hpa
[root@master 1.8+]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          62s
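For reference, on clusters where the autoscaling/v2beta2 (or the newer autoscaling/v2) API is available, the same target can be expressed with a metrics block; a hedged equivalent sketch of pc-hpa.yaml:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 3   # same deliberately low target as above, so scaling is easy to trigger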

4. Test

Use a load-testing tool to stress the service address 192.168.109.100:31136, then watch the changes of the hpa and the pods from the console.
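Any HTTP load generator will do; a minimal client-side loop is enough (run from any machine that can reach the node, stop with Ctrl+C):

# drive CPU usage up on the nginx pods by requesting the page in a tight loop
while true; do curl -s http://192.168.109.100:31136 > /dev/null; done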

HPA changes

[root@master ~]# kubectl get hpa -n dev -w
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          4m11s
pc-hpa   Deployment/nginx   0%/3%     1         10        1          5m19s
pc-hpa   Deployment/nginx   22%/3%    1         10        1          6m50s
pc-hpa   Deployment/nginx   22%/3%    1         10        4          7m5s
pc-hpa   Deployment/nginx   22%/3%    1         10        8          7m21s
pc-hpa   Deployment/nginx   6%/3%     1         10        8          7m51s
pc-hpa   Deployment/nginx   0%/3%     1         10        8          9m6s
pc-hpa   Deployment/nginx   0%/3%     1         10        8          13m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          14m

Deployment changes

[root@master ~]# kubectl get deployment -n dev -w
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           11m
nginx   1/4     1            1           13m
nginx   1/4     1            1           13m
nginx   1/4     1            1           13m
nginx   1/4     4            1           13m
nginx   1/8     4            1           14m
nginx   1/8     4            1           14m
nginx   1/8     4            1           14m
nginx   1/8     8            1           14m
nginx   2/8     8            2           14m
nginx   3/8     8            3           14m
nginx   4/8     8            4           14m
nginx   5/8     8            5           14m
nginx   6/8     8            6           14m
nginx   7/8     8            7           14m
nginx   8/8     8            8           15m
nginx   8/1     8            8           20m
nginx   8/1     8            8           20m
nginx   1/1     1            1           20m

Pod changes

[root@master ~]# kubectl get pods -n dev -w
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7df9756ccc-bh8dr   1/1     Running   0          11m
nginx-7df9756ccc-cpgrv   0/1     Pending   0          0s
nginx-7df9756ccc-8zhwk   0/1     Pending   0          0s
nginx-7df9756ccc-rr9bn   0/1     Pending   0          0s
nginx-7df9756ccc-cpgrv   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-8zhwk   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-rr9bn   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-m9gsj   0/1     Pending             0          0s
nginx-7df9756ccc-g56qb   0/1     Pending             0          0s
nginx-7df9756ccc-sl9c6   0/1     Pending             0          0s
nginx-7df9756ccc-fgst7   0/1     Pending             0          0s
nginx-7df9756ccc-g56qb   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-m9gsj   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-sl9c6   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-fgst7   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-8zhwk   1/1     Running             0          19s
nginx-7df9756ccc-rr9bn   1/1     Running             0          30s
nginx-7df9756ccc-m9gsj   1/1     Running             0          21s
nginx-7df9756ccc-cpgrv   1/1     Running             0          47s
nginx-7df9756ccc-sl9c6   1/1     Running             0          33s
nginx-7df9756ccc-g56qb   1/1     Running             0          48s
nginx-7df9756ccc-fgst7   1/1     Running             0          66s
nginx-7df9756ccc-fgst7   1/1     Terminating         0          6m50s
nginx-7df9756ccc-8zhwk   1/1     Terminating         0          7m5s
nginx-7df9756ccc-cpgrv   1/1     Terminating         0          7m5s
nginx-7df9756ccc-g56qb   1/1     Terminating         0          6m50s
nginx-7df9756ccc-rr9bn   1/1     Terminating         0          7m5s
nginx-7df9756ccc-m9gsj   1/1     Terminating         0          6m50s
nginx-7df9756ccc-sl9c6   1/1     Terminating         0          6m50s

1.5 DaemonSet(DS)

A DaemonSet type controller ensures that one replica runs on every node (or every specified node) in the cluster. It is generally used for log collection, node monitoring and similar scenarios. In other words, if the function a Pod provides is node-level (every node needs exactly one), that kind of Pod is suitable to be created with a DaemonSet controller.

Features of DaemonSet controller:

  • Whenever a node is added to the cluster, the specified Pod replica is also added to the node
  • When a node is removed from the cluster, the Pod is garbage collected

Let's first look at the resource manifest file of DaemonSet

apiVersion: apps/v1 # Version number
kind: DaemonSet # type       
metadata: # metadata
  name: # ds name
  namespace: # Namespace 
  labels: #label
    controller: daemonset
spec: # Detailed description
  revisionHistoryLimit: 3 # Number of old versions to keep for rollback
  updateStrategy: # Update strategy
    type: RollingUpdate # Rolling update strategy
    rollingUpdate: # Rolling update
      maxUnavailable: 1 # The maximum number of pods that can be unavailable during the update, as a percentage or an integer
  selector: # Selector, which specifies which pods the controller manages
    matchLabels:      # Labels matching rule
      app: nginx-pod
    matchExpressions: # Expressions matching rule
      - {key: app, operator: In, values: [nginx-pod]}
  template: # Template. When the number of copies is insufficient, a pod copy will be created according to the following template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
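As mentioned above, a DaemonSet can also be restricted to specified nodes by adding a nodeSelector to the pod template; a minimal sketch (the node label used here is hypothetical):

spec:
  template:
    spec:
      nodeSelector:
        node-type: logging   # hypothetical label; only nodes carrying it will run the daemon pod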

Create pc-daemonset.yaml with the following content:

apiVersion: apps/v1
kind: DaemonSet      
metadata:
  name: pc-daemonset
  namespace: dev
spec: 
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# Create daemonset
[root@master ~]# kubectl create -f  pc-daemonset.yaml
daemonset.apps/pc-daemonset created

# View daemonset
[root@master ~]#  kubectl get ds -n dev -o wide
NAME        DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE   AGE   CONTAINERS   IMAGES         
pc-daemonset   2        2        2      2           2        24s   nginx        nginx:1.17.1   

# Check the pod and find that a pod is running on each Node
[root@master ~]#  kubectl get pods -n dev -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE    
pc-daemonset-9bck8   1/1     Running   0          37s   10.244.1.43   node1     
pc-daemonset-k224w   1/1     Running   0          37s   10.244.2.74   node2      

# Delete daemonset
[root@master ~]# kubectl delete -f pc-daemonset.yaml
daemonset.apps "pc-daemonset" deleted

1.6 Job

Job is mainly used for batch processing (processing a specified number of tasks at a time), i.e. short-lived one-off tasks (each task runs only once and then ends). Its features are as follows:

  • When a pod created by the Job completes successfully, the Job records the number of successfully completed pods
  • When the number of successfully completed pods reaches the specified count, the Job is complete

Job resource list file:

apiVersion: batch/v1 # Version number
kind: Job # type       
metadata: # metadata
  name: # job name
  namespace: # Namespace 
  labels: #label
    controller: job
spec: # Detailed description
  completions: 1 # Specifies the number of times a job needs to run Pods successfully. Default: 1
  parallelism: 1 # Specifies the number of Pods that a job should run concurrently at any one time. Default: 1
  activeDeadlineSeconds: 30 # Specify the time period within which the job can run. If it is not over, the system will try to terminate it.
  backoffLimit: 6 # Specifies the number of retries after a job fails. The default is 6
  manualSelector: true # Whether a selector can be used to select pods; defaults to false
  selector: # Selector, which specifies which pods the controller manages
    matchLabels:      # Labels matching rule
      app: counter-pod
    matchExpressions: # Expressions matching rule
      - {key: app, operator: In, values: [counter-pod]}
  template: # Template. When the number of copies is insufficient, a pod copy will be created according to the following template
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never # The restart policy can only be set to Never or OnFailure
      containers:
      - name: counter
        image: busybox:1.30
        command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 2;done"]
Notes on the restartPolicy setting:
    If it is set to OnFailure, the job restarts the container when the pod fails instead of creating a new pod, and the failed count stays unchanged
    If it is set to Never, the job creates a new pod when the pod fails; the failed pod does not disappear or restart, and the failed count increases by 1
    If it were set to Always, the container would be restarted forever, which means the job task would run repeatedly; that is of course wrong, so Always cannot be used

Create pc-job.yaml with the following content:

apiVersion: batch/v1
kind: Job      
metadata:
  name: pc-job
  namespace: dev
spec:
  manualSelector: true
  selector:
    matchLabels:
      app: counter-pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox:1.30
        command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]
# Create job
[root@master ~]# kubectl create -f pc-job.yaml
job.batch/pc-job created

# View job
[root@master ~]# kubectl get job -n dev -o wide  -w
NAME     COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES         SELECTOR
pc-job   0/1           21s        21s   counter      busybox:1.30   app=counter-pod
pc-job   1/1           31s        79s   counter      busybox:1.30   app=counter-pod

# By observing the status of the pod, you can see that the pod will change to the Completed status after the task is Completed
[root@master ~]# kubectl get pods -n dev -w
NAME           READY   STATUS     RESTARTS      AGE
pc-job-rxg96   1/1     Running     0            29s
pc-job-rxg96   0/1     Completed   0            33s

# Next, adjust the total number of pod runs and the parallelism, i.e. set the following two options under spec
#  completions: 6 # the job must run pods to successful completion 6 times in total
#  parallelism: 3 # the job runs 3 pods concurrently
#  Then re-run the job and observe the effect: the job runs 3 pods at a time until 6 pods in total have completed
[root@master ~]# kubectl get pods -n dev -w
NAME           READY   STATUS    RESTARTS   AGE
pc-job-684ft   1/1     Running   0          5s
pc-job-jhj49   1/1     Running   0          5s
pc-job-pfcvh   1/1     Running   0          5s
pc-job-684ft   0/1     Completed   0          11s
pc-job-v7rhr   0/1     Pending     0          0s
pc-job-v7rhr   0/1     Pending     0          0s
pc-job-v7rhr   0/1     ContainerCreating   0          0s
pc-job-jhj49   0/1     Completed           0          11s
pc-job-fhwf7   0/1     Pending             0          0s
pc-job-fhwf7   0/1     Pending             0          0s
pc-job-pfcvh   0/1     Completed           0          11s
pc-job-5vg2j   0/1     Pending             0          0s
pc-job-fhwf7   0/1     ContainerCreating   0          0s
pc-job-5vg2j   0/1     Pending             0          0s
pc-job-5vg2j   0/1     ContainerCreating   0          0s
pc-job-fhwf7   1/1     Running             0          2s
pc-job-v7rhr   1/1     Running             0          2s
pc-job-5vg2j   1/1     Running             0          3s
pc-job-fhwf7   0/1     Completed           0          12s
pc-job-v7rhr   0/1     Completed           0          12s
pc-job-5vg2j   0/1     Completed           0          12s
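For reference, the adjustment described in the comments above sits directly under spec in pc-job.yaml; a minimal sketch:

spec:
  completions: 6   # total number of pods that must complete successfully
  parallelism: 3   # number of pods allowed to run at the same time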

# Delete job
[root@master ~]# kubectl delete -f pc-job.yaml
job.batch "pc-job" deleted

1.7 CronJob(CJ)

The CronJob controller uses Job controller resources as its managed objects, and the Job controller in turn manages the pod resources. Job tasks defined by a Job controller are executed immediately after the controller resource is created, whereas a CronJob controls the point in time at which jobs run and how they repeat, much like the crontab of the Linux operating system. In other words, a CronJob can run job tasks (repeatedly) at specific points in time.

Resource manifest file of CronJob:

apiVersion: batch/v1beta1 # Version number
kind: CronJob # type       
metadata: # metadata
  name: # cj name
  namespace: # Namespace 
  labels: #label
    controller: cronjob
spec: # Detailed description
  schedule: # cron format job scheduling run time point, which is used to control when tasks are executed
  concurrencyPolicy: # Concurrent execution policy is used to define whether and how to run the next job when the previous job run has not been completed
  failedJobsHistoryLimit: # The number of failed job runs to keep in history. The default is 1
  successfulJobsHistoryLimit: # The number of successful job runs to keep in history. The default is 3
  startingDeadlineSeconds: # Deadline (in seconds) for starting the job if it misses its scheduled time
  jobTemplate: # The job controller template is used to generate job objects for the cronjob controller; The following is actually the definition of job
    metadata:
    spec:
      completions: 1
      parallelism: 1
      activeDeadlineSeconds: 30
      backoffLimit: 6
      manualSelector: true
      selector:
        matchLabels:
          app: counter-pod
        matchExpressions: # Expressions matching rule
          - {key: app, operator: In, values: [counter-pod]}
      template:
        metadata:
          labels:
            app: counter-pod
        spec:
          restartPolicy: Never 
          containers:
          - name: counter
            image: busybox:1.30
            command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 20;done"]
Several key options explained:
schedule: a cron expression that specifies when the task is executed
	*/1    *      *    *     *
	<minute> <hour> <day> <month> <week>

    minute: values range from 0 to 59.
    hour: values range from 0 to 23.
    day: values range from 1 to 31.
    month: values range from 1 to 12.
    week: values range from 0 to 6, where 0 means Sunday.
    Multiple values can be separated by commas; a range can be given with a hyphen; * can be used as a wildcard; / means "every", e.g. */1 in the minute field means every minute.
concurrencyPolicy:
	Allow:   allow Jobs to run concurrently (default)
	Forbid:  forbid concurrent runs; if the previous run has not finished, the next run is skipped
	Replace: cancel the currently running job and replace it with the new one
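A few illustrative schedule values:

schedule: "*/1 * * * *"   # every minute
schedule: "0 2 * * *"     # at 02:00 every day
schedule: "0 */2 * * *"   # every 2 hours, on the hour
schedule: "0 0 * * 0"     # at midnight every Sunday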

Create pc-cronjob.yaml with the following content:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pc-cronjob
  namespace: dev
  labels:
    controller: cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]
# Create cronjob
[root@master ~]# kubectl create -f pc-cronjob.yaml
cronjob.batch/pc-cronjob created

# View cronjob
[root@master ~]# kubectl get cronjobs -n dev
NAME         SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pc-cronjob   */1 * * * *   False     0        <none>          6s

# View job
[root@master ~]# kubectl get jobs -n dev
NAME                    COMPLETIONS   DURATION   AGE
pc-cronjob-1592587800   1/1           28s        3m26s
pc-cronjob-1592587860   1/1           28s        2m26s
pc-cronjob-1592587920   1/1           28s        86s

# View pod
[root@master ~]# kubectl get pods -n dev
pc-cronjob-1592587800-x4tsm   0/1     Completed   0          2m24s
pc-cronjob-1592587860-r5gv4   0/1     Completed   0          84s
pc-cronjob-1592587920-9dxxq   1/1     Running     0          24s


# Delete cronjob
[root@master ~]# kubectl  delete -f pc-cronjob.yaml
cronjob.batch "pc-cronjob" deleted

2. Detailed explanation of Service

2.1 Service introduction

In kubernetes, the pod is the carrier of the application program. We can access the application through the pod's ip, but a pod's ip address is not fixed, which makes it inconvenient to access the service directly via pod ips. To solve this problem, kubernetes provides the Service resource: a Service aggregates multiple pods that provide the same service and offers a unified entry address; by accessing the Service's entry address you reach the pods behind it.

In many cases, a Service is only a concept; what really does the work is the kube-proxy process. A kube-proxy process runs on every Node. When a Service is created, its information is written to etcd through the api-server; kube-proxy detects this change through its watch mechanism and then converts the latest Service information into the corresponding access rules.

# 10.97.97.97:80 is the access entry provided by the service
# When this entry is accessed, three pod services are found waiting to be called
# kube-proxy distributes each request to one of the pods based on the rr (round-robin) policy
# This rule is generated on all nodes of the cluster at the same time, so the service can be accessed from any node
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

kube-proxy currently supports three working modes:

userspace mode
In userspace mode, kube-proxy creates a listening port for each Service. Requests sent to the Cluster IP are redirected to kube-proxy's listening port by iptables rules; kube-proxy then selects a Pod that provides the service according to its LB algorithm, establishes a connection with it, and forwards the request to the Pod.
In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds extra data copies between kernel space and user space; it is relatively stable but inefficient.

iptables mode
In iptables mode, kube-proxy creates the corresponding iptables rules for each backend Pod of a Service, redirecting requests sent to the Cluster IP directly to a Pod IP.
In this mode kube-proxy does not act as a layer-4 load balancer; it is only responsible for creating the iptables rules. The advantage is higher efficiency than userspace mode, but it cannot provide a flexible LB strategy and cannot retry when a backend Pod is unavailable.

ipvs mode
ipvs mode is similar to iptables: kube-proxy watches Pod changes and creates the corresponding ipvs rules. ipvs forwards more efficiently than iptables and also supports more LB algorithms.

# ipvs kernel module must be installed in this mode, otherwise it will be degraded to iptables
# Turn on ipvs
[root@master ~]# kubectl edit cm kube-proxy -n kube-system
[root@master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
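For reference, the change made with kubectl edit cm kube-proxy is setting the mode field of the KubeProxyConfiguration (stored under the config.conf key in a kubeadm-built cluster); a minimal fragment:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # an empty string or "iptables" selects the iptables mode instead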

2.2 Service type

Resource manifest file for Service:

kind: Service  # Resource type
apiVersion: v1  # Resource version
metadata: # metadata
  name: service # Resource name
  namespace: dev # Namespace
spec: # describe
  selector: # Label selector, used to determine which pods the current service proxies
    app: nginx
  type: # Service type, which specifies the access method of the service
  clusterIP:  # ip address of virtual service
  sessionAffinity: # session affinity, which supports ClientIP and None
  ports: # port information
    - protocol: TCP 
      port: 3017  # service port
      targetPort: 5003 # pod port
      nodePort: 31122 # Host port
The type field supports four values:

  • ClusterIP: the default value; a virtual IP automatically assigned by the kubernetes system, accessible only inside the cluster
  • NodePort: exposes the Service on a port of each Node, so the Service can be accessed from outside the cluster
  • LoadBalancer: uses an external load balancer to distribute load to the service; this mode requires support from an external cloud environment
  • ExternalName: brings a service outside the cluster into the cluster so it can be used directly

2.3 Service usage

2.3.1 preparation of experimental environment

Before using a Service, first create three pods with a Deployment. Note that the pods must carry the label app=nginx-pod.

Create deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment      
metadata:
  name: pc-deployment
  namespace: dev
spec: 
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
[root@master ~]# kubectl create -f deployment.yaml
deployment.apps/pc-deployment created

# View pod details
[root@master ~]# kubectl get pods -n dev -o wide --show-labels
NAME                             READY   STATUS     IP            NODE     LABELS
pc-deployment-66cb59b984-8p84h   1/1     Running    10.244.1.40   node1    app=nginx-pod
pc-deployment-66cb59b984-vx8vx   1/1     Running    10.244.2.33   node2    app=nginx-pod
pc-deployment-66cb59b984-wnncx   1/1     Running    10.244.1.39   node1    app=nginx-pod

# To make the following tests easier, modify the index.html page of each of the three nginx pods so that each returns its own pod IP (so the three responses differ)
# kubectl exec -it pc-deployment-66cb59b984-8p84h -n dev /bin/sh
# echo "10.244.1.40" > /usr/share/nginx/html/index.html

# After the modification, test the access
[root@master ~]# curl 10.244.1.40
10.244.1.40
[root@master ~]# curl 10.244.2.33
10.244.2.33
[root@master ~]# curl 10.244.1.39
10.244.1.39

2.3.2 ClusterIP type Service

Create a service-clusterip.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97 # The ip address of the service. If it is not written, it will be generated by default
  type: ClusterIP
  ports:
  - port: 80  # Service port       
    targetPort: 80 # pod port
# Create service
[root@master ~]# kubectl create -f service-clusterip.yaml
service/service-clusterip created

# View service
[root@master ~]# kubectl get svc -n dev -o wide
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-clusterip   ClusterIP   10.97.97.97   <none>        80/TCP    13s   app=nginx-pod

# View service details
# The Endpoints list shows the backend entries that this service load-balances to
[root@master ~]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP:                10.97.97.97
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.39:80,10.244.1.40:80,10.244.2.33:80
Session Affinity:  None
Events:            <none>

# View the mapping rules of ipvs
[root@master ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

# Visit 10.97.97.97:80 to observe the effect
[root@master ~]# curl 10.97.97.97:80
10.244.2.33

Endpoint
An Endpoint is a resource object in kubernetes, stored in etcd, used to record the access addresses of all pods that correspond to a service. It is generated according to the selector described in the service configuration file.
A service consists of a group of pods that are exposed through endpoints; the endpoints are the collection of endpoints that realize the actual service. In other words, the connection between a service and its pods is realized through endpoints.
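Endpoints can also be inspected as a resource in their own right, for example:

kubectl get endpoints service-clusterip -n dev       # one object per service, listing the backend addresses
kubectl describe endpoints service-clusterip -n dev  # full address and port detail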

Load distribution policy
Requests that reach a service are distributed to the backend Pods. Kubernetes currently provides two load distribution strategies:

  • If no strategy is defined, the default kube-proxy policy is used, e.g. random or round-robin
  • Session affinity based on the client address: all requests from the same client are forwarded to a fixed Pod. This mode is enabled by adding sessionAffinity: ClientIP to the spec, as sketched below
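A minimal sketch of the second mode, reusing the ClusterIP service defined above:

apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  sessionAffinity: ClientIP # all requests from the same client IP go to the same pod
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80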
# View the mapping rules of ipvs [rr polling]
[root@master ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

# Cyclic access test
[root@master ~]# while true;do curl 10.97.97.97:80; sleep 5; done;
10.244.1.40
10.244.1.39
10.244.2.33
10.244.1.40
10.244.1.39
10.244.2.33

# Modify the distribution policy: set sessionAffinity: ClientIP on the service and apply the change

# View the ipvs rules [persistent indicates session persistence, here for 10800 seconds]
[root@master ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr persistent 10800
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

# Cyclic access test
[root@master ~]# while true;do curl 10.97.97.97; sleep 5; done;
10.244.2.33
10.244.2.33
10.244.2.33
  
# Delete service
[root@master ~]# kubectl delete -f service-clusterip.yaml
service "service-clusterip" deleted

2.3.3 Headless type Service

In some scenarios, developers may not want to use the load balancing provided by the Service and prefer to control the load-balancing policy themselves. For this case kubernetes provides the headless Service, which does not allocate a Cluster IP; the service can only be reached by querying its domain name.

Create service-headliness.yaml:

apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None # Set clusterIP to None to create a headless service
  type: ClusterIP
  ports:
  - port: 80    
    targetPort: 80
# Create service
[root@master ~]# kubectl create -f service-headliness.yaml
service/service-headliness created

# Get the service and find that CLUSTER-IP is not allocated
[root@master ~]# kubectl get svc service-headliness -n dev -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-headliness   ClusterIP   None         <none>        80/TCP    11s   app=nginx-pod

# View service details
[root@master ~]# kubectl describe svc service-headliness  -n dev
Name:              service-headliness
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP:                None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.39:80,10.244.1.40:80,10.244.2.33:80
Session Affinity:  None
Events:            <none>

# Check the resolution of the domain name
[root@master ~]# kubectl exec -it pc-deployment-66cb59b984-8p84h -n dev /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local

[root@master ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.40
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.39
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.33

2.3.4 NodePort type Service

In the previous examples, the IP address of the created service can only be accessed from inside the cluster. If you want to expose the service for external use, you need another type of service, called NodePort. NodePort works by mapping the service's port onto a port of the Node, so the service can then be accessed through NodeIP:NodePort.

Create service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort # service type
  ports:
  - port: 80
    nodePort: 30002 # Specify the port of the bound node (the default value range is 30000-32767). If it is not specified, it will be assigned by default
    targetPort: 80
# Create service
[root@master ~]# kubectl create -f service-nodeport.yaml
service/service-nodeport created

# View service
[root@master ~]# kubectl get svc -n dev -o wide
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)       SELECTOR
service-nodeport   NodePort   10.105.64.191   <none>        80:30002/TCP  app=nginx-pod

# Next, access port 30002 on any node IP in the cluster from the host's browser to reach the pods, for example:
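# For example, from the host (192.168.109.100 is the master node address used elsewhere in this
# tutorial; any other node IP in the cluster works just as well):
curl http://192.168.109.100:30002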

2.3.5 LoadBalancer type Service

LoadBalancer is very similar to NodePort; both aim to expose a port to the outside. The difference is that LoadBalancer places a load balancing device outside the cluster, which requires the support of the external environment (typically a cloud provider). Requests sent by external clients to that device are load balanced by it and forwarded into the cluster.
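
The manifest differs from the NodePort example mainly in the type field. A minimal sketch, assuming the environment can actually provision an external load balancer (e.g. a cloud provider); the name service-loadbalancer is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer # illustrative name
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer # an external load balancing device is provisioned by the environment
  ports:
  - port: 80
    targetPort: 80

Until the environment provisions the device, kubectl get svc shows the EXTERNAL-IP as <pending>.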

2.3.6 ExternalName type Service

An ExternalName type service is used to introduce a service from outside the cluster. It specifies the address of the external service through the externalName attribute; accessing this service from inside the cluster then reaches the external service.

apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName # service type
  externalName: www.baidu.com  # This can also be changed to an IP address
# Create service
[root@master ~]# kubectl  create -f service-externalname.yaml
service/service-externalname created

# Domain name resolution
[root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com.          30      IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       30      IN      A       39.156.66.18
www.a.shifen.com.       30      IN      A       39.156.66.14

2.4 introduction to ingress

As mentioned in the previous sections, there are two main ways for a Service to expose services outside the cluster, NodePort and LoadBalancer, but both of them have certain disadvantages:

  • The disadvantage of the NodePort mode is that it occupies ports on the cluster machines; as the number of services grows, this drawback becomes more and more obvious
  • The disadvantage of the LoadBalancer mode is that each service needs its own LB, which is wasteful and cumbersome, and requires the support of devices outside kubernetes

Based on this situation, kubernetes provides the ingress resource object. Ingress needs only one NodePort or one LB to expose many services. Its working mechanism is roughly as follows:

In fact, ingress is equivalent to a layer-7 load balancer. It is kubernetes' abstraction of a reverse proxy, and its working principle is similar to Nginx: many mapping rules are established in the ingress, the Ingress Controller monitors these rules and converts them into Nginx reverse proxy configuration, and that proxy then serves requests from the outside. There are two core concepts here:

  • ingress: a kubernetes object that defines the rules for how requests are forwarded to services
  • ingress controller: a program that implements reverse proxying and load balancing. It parses the rules defined by the ingress and forwards requests accordingly. There are many implementations, such as Nginx, Contour, HAProxy and so on

The working principle of Ingress (taking Nginx as an example) is as follows:

  1. The user writes Ingress rules that specify which domain name corresponds to which Service in the kubernetes cluster
  2. The Ingress controller dynamically senses changes to the Ingress rules and generates the corresponding Nginx reverse proxy configuration
  3. The Ingress controller writes the generated Nginx configuration into a running Nginx service and reloads it dynamically
  4. At this point, the component really doing the work is an Nginx instance configured with the user-defined forwarding rules (a simplified sketch of such a configuration follows this list)
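
To make steps 2 to 4 concrete, the generated configuration amounts to name-based virtual hosting. Below is a heavily simplified, illustrative sketch of the kind of server block that corresponds to one rule, using the host and Service names from the example in 2.5.2; the real configuration produced by ingress-nginx is far larger and resolves upstream pods dynamically:

# Conceptual sketch only, not the literal output of the controller
server {
    listen 80;
    server_name nginx.itheima.com;   # host from the Ingress rule

    location / {                     # path from the Ingress rule
        # requests are proxied to the endpoints of the backing Service
        proxy_pass http://nginx-service.dev.svc.cluster.local:80;
    }
}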

2.5 use of ingress

2.5.1 environmental preparation

Setting up the ingress environment

# create folder
[root@master ~]# mkdir ingress-controller
[root@master ~]# cd ingress-controller/

# Obtain ingress-nginx. Version 0.30 is used in this case
[root@master ingress-controller]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
[root@master ingress-controller]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml

# Modify the image repository in the mandatory.yaml file:
# change quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
# to quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
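# One way to make that substitution in place (quay-mirror.qiniu.com is a commonly used mirror of quay.io;
# any reachable mirror of the image also works):
sed -i 's#quay.io/kubernetes-ingress-controller#quay-mirror.qiniu.com/kubernetes-ingress-controller#' mandatory.yaml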
# Create ingress nginx
[root@master ingress-controller]# kubectl apply -f ./

# View ingress nginx
[root@master ingress-controller]# kubectl get pod -n ingress-nginx
NAME                                           READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-fbf967dd5-4qpbp   1/1     Running   0          12h

# View service
[root@master ingress-controller]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.98.75.163   <none>        80:32240/TCP,443:31335/TCP   11h

Prepare service and pod

For the convenience of the later experiments, create the following two deployments and two services

Create tomcat-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  selector:
    app: tomcat-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080

# Create
[root@master ~]# kubectl create -f tomcat-nginx.yaml

# View
[root@master ~]# kubectl get svc -n dev
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
nginx-service    ClusterIP   None         <none>        80/TCP     48s
tomcat-service   ClusterIP   None         <none>        8080/TCP   48s
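
# Optionally, confirm that all six pods (three nginx and three tomcat) are running before wiring up the ingress
kubectl get pods -n dev -o wide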

2.5.2 Http proxy

Create ingress-http.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
  - host: nginx.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  - host: tomcat.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080
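
Note that the extensions/v1beta1 Ingress API used here was removed in Kubernetes 1.22. On newer clusters the same rules are written against networking.k8s.io/v1; a sketch of the equivalent manifest, where the ingressClassName value is an assumption that depends on how the controller was installed:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  ingressClassName: nginx # assumption: must match the controller's IngressClass
  rules:
  - host: nginx.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080

The rest of this section continues with the extensions/v1beta1 manifest shown above.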

# Create
[root@master ~]# kubectl create -f ingress-http.yaml
ingress.extensions/ingress-http created

# View
[root@master ~]# kubectl get ing ingress-http -n dev
NAME           HOSTS                                  ADDRESS   PORTS   AGE
ingress-http   nginx.itheima.com,tomcat.itheima.com             80      22s

# View details
[root@master ~]# kubectl describe ing ingress-http  -n dev
...
Rules:
Host                Path  Backends
----                ----  --------
nginx.itheima.com   / nginx-service:80 (10.244.1.96:80,10.244.1.97:80,10.244.2.112:80)
tomcat.itheima.com  / tomcat-service:8080(10.244.1.94:8080,10.244.1.95:8080,10.244.2.111:8080)
...

# Next, configure the hosts file on the local computer and resolve the above two domain names to 192.168.109.100 (master)
# Then you can access tomcat.itheima.com:32240 and nginx.itheima.com:32240 to check the effect
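# Alternatively, the same check can be made without editing the hosts file by sending the Host header
# explicitly (32240 is the http NodePort of the ingress-nginx Service shown in 2.5.1):
curl -H "Host: nginx.itheima.com" http://192.168.109.100:32240
curl -H "Host: tomcat.itheima.com" http://192.168.109.100:32240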

2.5.3 Https proxy

Create certificate

# Generate certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=BJ/L=BJ/O=nginx/CN=itheima.com"

# Create the secret (in the same namespace as the Ingress that will reference it)
kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n dev

Create ingress-https.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  tls:
    - hosts:
      - nginx.itheima.com
      - tomcat.itheima.com
      secretName: tls-secret # Specify the secret that stores the certificate and key
  rules:
  - host: nginx.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  - host: tomcat.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080
# Create
[root@master ~]# kubectl create -f ingress-https.yaml
ingress.extensions/ingress-https created

# View
[root@master ~]# kubectl get ing ingress-https -n dev
NAME            HOSTS                                  ADDRESS         PORTS     AGE
ingress-https   nginx.itheima.com,tomcat.itheima.com   10.104.184.38   80, 443   2m42s

# View details
[root@master ~]# kubectl describe ing ingress-https -n dev
...
TLS:
  tls-secret terminates nginx.itheima.com,tomcat.itheima.com
Rules:
Host              Path Backends
----              ---- --------
nginx.itheima.com  /  nginx-service:80 (10.244.1.97:80,10.244.1.98:80,10.244.2.119:80)
tomcat.itheima.com /  tomcat-service:8080(10.244.1.99:8080,10.244.2.117:8080,10.244.2.120:8080)
...

# Now you can access https://nginx.itheima.com:31335 and https://tomcat.itheima.com:31335 in the browser to check the effect
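# Alternatively, test from the command line without editing the hosts file; -k skips certificate
# verification because the certificate above is self-signed, and 31335 is the https NodePort of ingress-nginx
curl -k --resolve nginx.itheima.com:31335:192.168.109.100 https://nginx.itheima.com:31335
curl -k --resolve tomcat.itheima.com:31335:192.168.109.100 https://tomcat.itheima.com:31335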
