k8s network communication

(1) Kubernetes connects to network plug-ins through the CNI interface to realize network communication. Popular plug-ins include Flannel, Calico, etc.
CNI plug-in configuration location: cat /etc/cni/net.d/10-flannel.conflist
(2) The solutions used by these plug-ins fall into the following categories:
Virtual bridge: a virtual bridge plus virtual network cards; multiple containers share the virtual bridge for communication.
Multiplexing: MACVLAN, where multiple containers share one physical network card for communication.
Hardware switching: SR-IOV, where one physical network card is virtualized into multiple interfaces; this offers the best performance.
(3) Network communication at each level
1. Inter-container communication: containers in the same pod communicate through lo (the loopback interface);
2. Communication between pods:
(1) Pods on the same node forward packets through the cni0 bridge
(2) Pods on different nodes need the support of a network plug-in to communicate
3. Communication between pod and service: realized through iptables or ipvs (see the inspection commands after this list). ipvs cannot fully replace iptables, because ipvs only does load balancing and cannot do NAT.
4. Communication between pod and the external network: MASQUERADE rules in iptables.
5. Communication between a service and clients outside the cluster: Ingress, NodePort, LoadBalancer.
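
As a quick sanity check, the rules that kube-proxy programs for a Service can be inspected on any node. A minimal sketch, assuming kube-proxy runs in iptables mode (or in ipvs mode with ipvsadm installed); myservice is the Service created later in this article:

iptables -t nat -nL KUBE-SERVICES | grep myservice   # per-Service DNAT entry points
iptables -t nat -nL KUBE-POSTROUTING                  # MASQUERADE rule used when pod traffic leaves the cluster
ipvsadm -Ln                                           # ipvs mode: virtual servers and their backend pod IPs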

(4) Service communication with clients outside the cluster
kubectl get services kube-dns --namespace=kube-system   # kubernetes provides a DNS plug-in Service
1. Access from inside the cluster only
(1).ClusterIP mode (default mode)

Following the setup established above:
kubectl describe svc myservice1
dig myservice1.default.svc.cluster.local. @10.96.0.10



(2).Headless Service
A Headless Service is not assigned a VIP; instead, DNS records resolve directly to the IP addresses of the proxied Pods.

Domain name format: $(servicename).$(namespace).svc.cluster.local, e.g. myservice.default.svc.cluster.local.
vim demo.yml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  #externalIPs:          # see 2.(3) below
  #- 172.25.2.100
  clusterIP: None
  #type: NodePort        # see 2.(1) below
  #type: LoadBalancer    # see 2.(2) below
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1

kubectl apply -f demo.yml
kubectl get svc
kubectl run test --image=busyboxplus -it
# Pod IPs can still be resolved after a rolling update
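
From inside the test pod the headless Service resolves directly to the pod IPs. A minimal sketch of the check, assuming the cluster DNS address 10.96.0.10 used in the dig example above:

/ # nslookup myservice.default.svc.cluster.local
# a headless Service returns one A record per backend pod (10.244.x.x addresses)
# instead of a single ClusterIP; after a rolling update (e.g. changing the image
# in demo.yml and re-applying it) the same query returns the new pod IPs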



2. Access the Service from outside the cluster
(1).NodePort mode
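
With the demo.yml above, NodePort mode just means uncommenting type: NodePort (and commenting out clusterIP: None, since a headless Service cannot be of type NodePort). A minimal sketch of the checks; the node address and port below are placeholders for this environment:

kubectl apply -f demo.yml
kubectl get svc myservice          # PORT(S) now shows something like 80:3xxxx/TCP
curl <node-ip>:<nodeport>          # reachable on every node's IP at the allocated NodePort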


(2).LoadBalancer
Applicable to Kubernetes services running on a public cloud. After the Service is submitted, Kubernetes calls the CloudProvider to create a load balancer on the public cloud and configures the IP addresses of the proxied Pods as its backends.

(3).externalIPs: the Service can also be assigned an external (public) IP directly

(4).ExternalName

vim ex-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type:  ExternalName
  externalName: www.baidu.com
  
kubectl apply -f ex-service.yaml
kubectl get svc

kubectl attach test -it    # test either from inside the test pod, or with dig from the host as below
dig -t A my-service.default.svc.cluster.local. @10.96.0.10



(5) Principle of cross-host communication in Flannel VXLAN mode

1. flannel network components
VXLAN (Virtual Extensible LAN) is a network virtualization technology supported by Linux itself. VXLAN performs encapsulation and decapsulation entirely in kernel space, building an overlay network through a "tunnel" mechanism.
VTEP: VXLAN Tunnel End Point. In Flannel the default VNI is 1, which is why the VTEP device on each host is named flannel.1.
cni0: bridge device. Each time a pod is created, a veth pair is created: one end is eth0 inside the pod, the other end is a port (network card) on the cni0 bridge.
flannel.1: TUN device (virtual network card) used to process VXLAN packets (encapsulation and decapsulation). Pod traffic between different nodes is sent from this overlay device to the peer in tunnel form.
flanneld: runs on each host as an agent. It obtains a small subnet from the cluster network address space for its host, from which the IP addresses of all containers on that host are allocated. flanneld also watches the Kubernetes cluster database and provides flannel.1 with the MAC, IP and other network information needed when encapsulating data.
2. How the flannel network works
(1) When a container sends an IP packet, it goes through the veth pair to the cni0 bridge and is then routed to the local flannel.1 device for processing.
(2) VTEP devices communicate via layer-2 data frames. After receiving the original IP packet, the source VTEP adds the destination VTEP's MAC address, encapsulating it into an inner data frame to be sent to the destination VTEP device.
(3) This inner data frame cannot be transmitted on the host's layer-2 network. The Linux kernel must further encapsulate it into an ordinary host data frame, so that it can carry the inner frame and be transmitted through the host's eth0.
(4) Linux adds a VXLAN header in front of the inner data frame. The VXLAN header carries an important flag, the VNI, which a VTEP uses to decide whether a frame should be handled by itself.
(5) flannel.1 only knows the MAC address of the flannel.1 device at the other end, not the address of the host it sits on. In the Linux kernel, the forwarding decisions of network devices come from the FDB (forwarding database); the FDB entries for flannel.1 are maintained by the flanneld process.
(6) The Linux kernel adds a layer-2 frame header in front of the packet, filling in the MAC address of the target node, which is obtained from the host's ARP table.
(7) The data frame is sent out from the host's eth0 and travels across the host network to the eth0 of the target node. The target host's kernel network stack sees that the frame carries a VXLAN header with VNI 1, unpacks it, obtains the inner data frame, and hands it to the local flannel.1 device according to the VNI. flannel.1 decapsulates it and forwards it to the cni0 bridge according to the routing table, finally reaching the target container.
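
The forwarding data that steps (5) and (6) rely on can be inspected directly on a node. A minimal sketch of the commands, assuming the default flannel VXLAN setup (these are standard iproute2 tools, not flannel-specific):

ip -d link show flannel.1          # VTEP details: vxlan id 1, local IP, UDP port
ip route | grep flannel.1          # one route per remote node pod subnet via flannel.1
ip neigh show dev flannel.1        # ARP entries: remote subnet gateway -> remote flannel.1 MAC
bridge fdb show dev flannel.1      # FDB entries: remote flannel.1 MAC -> remote host IP (maintained by flanneld)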



server3 src:10.244.1.67 -> server4 dst:10.244.2.67

a. The YAML file follows the configuration above; the Service uses the default ClusterIP. On server3, you must first know the IP and MAC address of the target host's (server4) eth0



b. server3 already has the IP and MAC address of server4's eth0


c. Capture packets on server4 and inspect them
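
A minimal sketch of the capture, assuming flannel's default VXLAN UDP port 8472 and that eth0 is the host interface on server4:

tcpdump -i eth0 -nn udp port 8472
# the outer headers carry the two hosts' eth0 addresses, while the decoded VXLAN
# payload (VNI 1) carries the inner frame 10.244.1.67 -> 10.244.2.67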


3. flannel supports multiple backends:
(1) vxlan
    vxlan            # packet encapsulation, the default
    Directrouting    # direct routing: vxlan is used across network segments, host-gw-style routing within the same network segment
(2) host-gw          # host gateway: good performance, but only works on a layer-2 network and does not support crossing network segments; with thousands of pods it easily produces broadcast storms, so it is not recommended at scale
(3) udp              # poor performance, not recommended

# host-gw
kubectl -n kube-system edit cm kube-flannel-cfg
kubectl get pod -n kube-system |grep kube-flannel| awk '{system("kubectl delete pod "$1" -n kube-system")}'
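
What the edit changes is the Backend section of net-conf.json in the kube-flannel-cfg ConfigMap; the flannel pods are then deleted (as above) so they restart with the new backend. A minimal sketch, assuming the default 10.244.0.0/16 pod network used elsewhere in this article:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }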



# vxlan
# Directrouting
kubectl -n kube-system edit cm kube-flannel-cfg
kubectl get pod -n kube-system |grep kube-flannel| awk '{system("kubectl delete pod "$1" -n kube-system")}'
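
For DirectRouting the backend stays vxlan but gains the DirectRouting flag, so nodes in the same segment route directly and only cross-segment traffic is encapsulated. A minimal sketch under the same assumptions as above:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }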





(6) Ingress service
Ingress in Kubernetes is a globally load-balanced service set up to proxy different backend Services.
Ingress consists of two parts: the Ingress controller and the Ingress resource.
The Ingress controller provides the actual proxying capability for the Ingress objects you define. Commonly used reverse proxy projects, such as Nginx, HAProxy, Envoy and Traefik, all maintain a corresponding Ingress Controller for Kubernetes.

In essence, deploying ingress-nginx creates an ingress-nginx-controller-456pj pod in the ingress-nginx namespace, which is then controlled through the Ingress resource (YAML) file. Applying an Ingress effectively modifies the nginx configuration file inside that pod, and multiple virtual hosts can be created so that nginx schedules requests to the backend Services.

NGINX Ingress Controller
1. Deployment

# Download the images and push them to the local registry; pulling them may require a proxy to reach blocked registries
server1
docker load -i ingress-nginx.tar
docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0 reg.westos.org/library/nginx-ingress-controller:0.33.0
docker tag jettech/kube-webhook-certgen:v1.2.0 reg.westos.org/library/kube-webhook-certgen:v1.2.0
docker push reg.westos.org/library/nginx-ingress-controller:0.33.0
docker push reg.westos.org/library/kube-webhook-certgen:v1.2.0

server2
mkdir /root/ingress
cd /root/ingress
# Ingress controller
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml # download the reference file and modify it

kubectl apply -f deploy.yaml
kubectl get ns
kubectl -n ingress-nginx get pod



# Ingress service
cd /root/ingress
vim nginx.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo
spec:
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80

kubectl apply -f nginx.yml
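
To test the rule, the hostname must resolve to an address where the controller is reachable (the controller node when hostNetwork is used below, or a node IP plus the controller Service's NodePort otherwise). A minimal sketch; the address is a placeholder for this lab environment:

kubectl -n ingress-nginx get svc                       # see how the controller is exposed
kubectl get ingress
echo "172.25.2.4 www1.westos.org" >> /etc/hosts        # placeholder address for the controller
curl www1.westos.org                                   # requests are proxied to the myservice pods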




Use a DaemonSet plus a nodeSelector to deploy the ingress controller to specific nodes (here the ingress controller runs on server4). Then use hostNetwork to connect the pod directly to the network of its host node, so ports 80/443 of the host are used to access the service (see the sketch below).
The advantage is that the whole request chain is as short as possible, with better performance than NodePort mode.
The disadvantage is that, because the host node's network and ports are used directly, one node can only run one ingress controller pod.
This approach is better suited to high-concurrency production environments.
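
A minimal sketch of the fields that change in the controller manifest; the controller's args, service account and the rest of its pod spec are carried over unchanged from the downloaded manifest, and the ingress=nginx node label is an assumption (created with kubectl label nodes server4 ingress=nginx):

apiVersion: apps/v1
kind: DaemonSet                      # changed from Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true              # share the host network, listen directly on host ports 80/443
      nodeSelector:
        ingress: "nginx"             # only schedule on nodes labelled ingress=nginx (here server4)
      containers:
      - name: nginx-ingress-controller
        image: reg.westos.org/library/nginx-ingress-controller:0.33.0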

2. Ingress TLS configuration

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"  # set /CN and /O to a domain name you define yourself; otherwise the default certificate is used
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
kubectl get secrets 

vim tls.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret
  rules:
    - host: www1.westos.org
      http:
        paths:
        - path: /
          backend:
            serviceName: myservice
            servicePort: 80

kubectl apply -f tls.yml
kubectl describe ingress nginx-test
kubectl get ingress
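
A minimal sketch of the checks, assuming www1.westos.org resolves to a node where the controller listens on ports 80/443 (as in the hostNetwork setup above):

curl -I www1.westos.org            # plain HTTP is redirected to HTTPS by the controller
curl -k https://www1.westos.org    # -k skips verification because the certificate is self-signed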



3. Authentication

yum install httpd-tools -y
htpasswd -c auth wxh                                  # create an htpasswd file named "auth" for user wxh
kubectl create secret generic basic-auth --from-file=auth

vim tls.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
spec:
  tls:
    - hosts:
      - www1.westos.org
      secretName: tls-secret
  rules:
    - host: www1.westos.org
      http:
        paths:
        - path: /
          backend:
            serviceName: myservice
            servicePort: 80

kubectl apply -f tls.yml
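
A minimal sketch of the check; the password is whatever was entered for htpasswd above (shown here as a placeholder):

curl -k https://www1.westos.org                       # now returns 401 Authorization Required
curl -k https://www1.westos.org -u wxh:<password>     # succeeds with the credentials from the auth file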


4. Ingress address rewriting

 vim tls.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: www1.westos.org
      http:
        paths:
        - backend:
            serviceName: myservice
            servicePort: 80
          path: /westos(/|$)(.*)

kubectl apply -f tls.yml
curl www1.westos.org/westos   # the /westos prefix is stripped by the rewrite, so this returns the same page as the root of myservice

