On a Kubernetes cluster, a Service provides a stable access entry for the applications running in its Pods: clients can reach those applications through the ClusterIP (VIP) generated for the Service. But how does a client application inside a Pod learn the IP and port of a particular Service? For example, suppose there are two applications, an API application and a DB application, both managed by Deployments and both exposing ports through Services. The API needs to connect to the DB, but it only knows the DB application's name and the name of the corresponding Service, not the VIP address. If it could learn the VIP, it could reach the DB service through that ClusterIP. So how does it obtain that address?
1. apiServer
The backend Endpoints information for a Service can be retrieved directly from the apiserver, so the simplest approach is to query the apiserver. For the occasional special application, querying the Endpoints behind a Service through the apiserver is acceptable; but if every application queries its dependent Services at startup, it not only increases the complexity of the application, it also couples the application to Kubernetes itself, making it far less portable.
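As a sketch of what such a query looks like (using the nginx-service created later in this section for illustration), the Endpoints object can be read through the apiserver's REST API via a local proxy, or with kubectl:

# open a local proxy to the apiserver, then query the Endpoints object of a Service
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/default/endpoints/nginx-service
# the same information via kubectl:
kubectl get endpoints nginx-service -o yaml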
2. Environment variables
To address this, earlier versions of Kubernetes relied on environment variables: when each Pod starts, the IP and port of every existing Service are injected as environment variables, so applications in the Pod can read them to locate their dependencies. This method is simple to use, but it has a major drawback: a dependent Service must exist before the Pod starts, otherwise it is not injected into the environment. For example, first create an nginx Service: (test_nginx.yaml)
[root@k8s-master1 service]# vim test_nginx.yaml
[root@k8s-master1 service]# cat test_nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    k8s-app: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    name: nginx-service
spec:
  ports:
  - port: 5000
    targetPort: 80
  selector:
    app: nginx
[root@k8s-master1 service]# kubectl apply -f test_nginx.yaml
deployment.apps/nginx-deploy created
service/nginx-service created
[root@k8s-master1 service]# kubectl get deployments
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   2/2     2            2           11s
[root@k8s-master1 service]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-75b69bd684   2         2         2       15s
[root@k8s-master1 service]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-deploy-75b69bd684-57ck4   1/1     Running   0          21s   10.244.36.110   k8s-node1   <none>           <none>
nginx-deploy-75b69bd684-dtntv   1/1     Running   0          21s   10.244.36.111   k8s-node1   <none>           <none>
[root@k8s-master1 service]# kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP    43d   <none>
nginx-service   ClusterIP   10.103.68.123   <none>        5000/TCP   37s   app=nginx
You can see that two Pods and a Service named nginx-service were created successfully. The Service listens on port 5000 and forwards traffic to all the Pods it proxies (here, the two Pods carrying the app: nginx label).
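As a quick sanity check (a sketch; the ClusterIP is the one from the output above and is reachable only from inside the cluster, e.g. from a node or another Pod):

[root@k8s-master1 service]# curl -s http://10.103.68.123:5000

This should return the default nginx welcome page, confirming that the VIP forwards Service port 5000 to container port 80 on the Pods.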
Then create an ordinary Pod and check whether its environment variables contain the information of the nginx-service Service above: (test_pod.yaml)
[root@k8s-master1 service]# vim test_pod.yaml
[root@k8s-master1 service]# cat test_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-service-pod
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "env"]
[root@k8s-master1 service]# kubectl apply -f test_pod.yaml
pod/test-pod created
After the Pod is created, view its log output:
[root@k8s-master1 service]# kubectl logs test-pod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=test-pod
SHLVL=1
HOME=/root
NGINX_SERVICE_PORT_5000_TCP_ADDR=10.103.68.123
NGINX_SERVICE_PORT_5000_TCP_PORT=5000
NGINX_SERVICE_PORT_5000_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_SERVICE_SERVICE_HOST=10.103.68.123
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_SERVICE_PORT_5000_TCP=tcp://10.103.68.123:5000
KUBERNETES_PORT_443_TCP_PROTO=tcp
NGINX_SERVICE_SERVICE_PORT=5000
NGINX_SERVICE_PORT=tcp://10.103.68.123:5000
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
You can see that a number of environment variables were printed, including entries for the nginx-service Service just created: HOST, PORT, PROTO, ADDR, and so on. If this Pod needs to access nginx-service, it can do so directly through NGINX_SERVICE_SERVICE_HOST and NGINX_SERVICE_SERVICE_PORT. However, if nginx-service is not yet running when the Pod starts, none of this information appears in the environment. An initContainer can be used to ensure nginx-service exists before the main container starts, but that adds complexity to Pod startup, so it is not the optimal approach.
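A minimal sketch of that initContainer approach (the Pod name env-test is hypothetical; the init container blocks until the nginx-service name resolves, so the Service exists before the main container's environment is computed):

apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  initContainers:
  - name: wait-for-nginx-service
    image: busybox:1.28
    # keep retrying until the Service name resolves in cluster DNS
    command: ["sh", "-c", "until nslookup nginx-service; do echo waiting; sleep 2; done"]
  containers:
  - name: main
    image: busybox:1.28
    # by now the Service exists, so its env vars should be injected
    command: ["/bin/sh", "-c", "env | grep NGINX_SERVICE"]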
In short, service discovery based on environment variables is simple and easy to use, but it has clear limitations: only Service objects that already exist in the same namespace as the Pod are injected as environment variables; Services in other namespaces, or Services created after the Pod, get no environment variables at all.
3. ClusterDNS
Because of these limitations, a smarter scheme is needed: use the name of the Service directly. A Service's name never changes, whereas its assigned ClusterIP is not fixed, so if the name could be automatically translated into the corresponding ClusterIP, clients would not need to care about the address at all. Name-to-IP translation is exactly the problem DNS solves, and Kubernetes accordingly provides a DNS-based solution for service discovery.
ClusterDNS, used for name resolution and service discovery on a Kubernetes cluster, is one of the cluster's core add-ons. Every Service object created in the cluster automatically gets corresponding DNS resource records. By default, every Pod is configured to use it as its name server, with the domain suffix of the Pod's own namespace included in the Pod's DNS search list.
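You can see this configuration from inside any Pod (a sketch using a throwaway busybox Pod; the nameserver IP and search list vary by cluster, typical output shown):

[root@k8s-master1 service]# kubectl run dns-check --image=busybox:1.28 --rm -it --restart=Never -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5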
Since Kubernetes 1.3, the DNS add-on used for service discovery has been kube-dns. A newer alternative is CoreDNS, a CNCF-incubated project written in Go, implemented as a chain of plugins that together provide DNS functionality. Since Kubernetes 1.11, CoreDNS has replaced kube-dns as the default DNS add-on. It is normally deployed right after cluster installation, and a cluster initialized with kubeadm deploys it automatically.
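In a kubeadm cluster you can confirm this (a sketch; the CoreDNS Pods keep the legacy k8s-app=kube-dns label and serve through the kube-dns Service for compatibility):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns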
CoreDNS official website address: https://coredns.io/
1) Domain name format:
If a Service can be resolved by domain name, the service discovery problem is solved. So what kind of DNS records does CoreDNS generate for a Service?
Common Service: generates a domain name of the form servicename.namespace.svc.cluster.local, which resolves to the Service's ClusterIP. Between Pods it can be abbreviated to servicename.namespace, and within the same namespace simply to servicename.
Headless Service: a headless Service, i.e. one with clusterIP set to None, resolves to the IP list of the Pods it selects, and a specific Pod can also be reached through podname.servicename.namespace.svc.cluster.local (see the sketch below).
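A minimal headless Service sketch for the nginx Pods above (the name nginx-headless is hypothetical; it assumes the app=nginx label from the earlier Deployment):

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None      # headless: DNS returns the Pod IPs directly, no VIP is allocated
  ports:
  - port: 80
  selector:
    app: nginx

Resolving nginx-headless from a Pod would then return the two Pod IPs (10.244.36.110 and 10.244.36.111) rather than a single ClusterIP.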
2) Test CoreDNS
Use a Pod to test domain-name access to the Service:
[root@k8s-master1 service]# vim dig-pod.yaml
[root@k8s-master1 service]# cat dig-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dig
  namespace: default
spec:
  containers:
  - name: dig
    image: test/dig:latest
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "3600"
  restartPolicy: Always
[root@k8s-master1 service]# kubectl apply -f dig-pod.yaml
pod/dig created
From inside the newly created dig Pod, try to resolve the DNS records of nginx-service and kubernetes:
[root@k8s-master1 service]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP    44d
nginx-service   ClusterIP   10.103.68.123   <none>        5000/TCP   22h
[root@k8s-master1 service]# kubectl exec -it dig -- nslookup nginx-service
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   nginx-service.default.svc.cluster.local
Address: 10.103.68.123
[root@k8s-master1 service]# kubectl exec -it dig -- nslookup kubernetes
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
During resolution, the short name "nginx-service" is searched in order through default.svc.cluster.local, svc.cluster.local, cluster.local, and ${WORK_NODE_DOMAIN}, so DNS-based service discovery is constrained neither by the Service's namespace nor by its creation time. The result above resolves to the IP address of the nginx-service Service created in the default namespace.
After a Service is created in Kubernetes, its default FQDN is <servicename>.<namespace>.svc.cluster.local, and services inside the cluster can be reached through this FQDN.
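For example, still using the dig Pod (a sketch; the second line assumes wget is available in the image):

kubectl exec -it dig -- nslookup nginx-service.default.svc.cluster.local
kubectl exec -it dig -- wget -qO- http://nginx-service.default.svc.cluster.local:5000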