Build a k8s test environment from scratch

By Hu Tao (@Daniel Hutao) | 18.04.2022

Quickly build a local Kubernetes test environment from scratch

Start from scratch? Start from scratch!

var zero = 99 // take it easy, just a joke.

Let's Get Started!

Overview

"Kubernetes cluster deployment" is strictly a complex technical activity, with many options. To deliver a set of highly available clusters close to "best practices", there are many technical details to consider. How to deploy a "truly highly available kubernetes cluster" is beyond the scope of this article, so our goal today is to quickly deploy an available kubernetes environment in a simple way. This environment is mainly used to meet the development and testing requirements of DevStream.

There are several options for rapidly deploying Kubernetes, such as minikube and kind. Minikube was originally implemented with virtualization (newer versions also support containerization): it creates several virtual machines locally through tools such as VirtualBox or KVM and runs the Kubernetes cluster inside them, one node per virtual machine. Kind is implemented with containerization: it starts several containers locally through Docker, each container acting as a Kubernetes node, with the containerized applications then running inside those "node" containers. In this article we choose kind, the "container in container" way of building a Kubernetes environment. Of course, if you have another tool you like, feel free to use it; our only purpose is to quickly deploy a usable Kubernetes cluster environment.

This article uses macOS as the development environment; readers who develop on Linux or Windows can follow the same approach with a few adjustments.

Installation of Docker

Installing Docker on Linux is very simple, because the core mechanisms of Docker are Linux namespaces, cgroups, and so on. On macOS and Windows, however, Docker has to run indirectly on top of virtualization. Fortunately, we don't have to install virtualization software and a Linux virtual machine ourselves before using Docker: we can simply download Docker Desktop from docker.com and run Docker through it.
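
(Side note: if you already use Homebrew, installing Docker Desktop from the command line also works; this is my own assumption of convenience, not the path the steps below follow.)

# Optional alternative to the manual download: install Docker Desktop via Homebrew
brew install --cask docker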

On https://www.docker.com/produc... we look for a suitable Docker Desktop build. The main thing to check is the CPU architecture: "Intel Chip" means the amd64 build for Macs, and "Apple Chip" means the arm64 build for M1 chips. The download page looks roughly like this:

After downloading, double-click the Docker.dmg file and you will see the installation page:

Drag the Docker icon into Applications, wait less than half a minute until the Docker icon appears in Launchpad, and then click it there to open Docker Desktop:

After a few seconds, you can see the startup page:

We can click the "gear" ⚙ī¸ button in the upper right corner to modify some Docker Desktop settings, such as the resources Docker is allowed to use. If we need to start more containers later and memory runs short, we can come back here and adjust it. For example, I increase the Memory to 4.00 GB:

After making changes, remember to click "Apply & Restart" in the lower right corner for them to take effect.
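
You can also verify the new limit from the command line:

# Prints the total memory available to Docker, in bytes (4 GB is roughly 4294967296)
docker info --format '{{.MemTotal}}'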

Kind introduction

Kind (Kubernetes in Docker) is a tool for deploying a Kubernetes cluster environment using Docker containers as "nodes". It was built mainly for testing Kubernetes itself. Today, many projects that need a Kubernetes environment for testing use kind in their CI pipelines to quickly pull up a cluster and then run the relevant test cases in it.

Kind itself is very simple: it consists of a small command-line tool, kind, and a Docker image that packages Kubernetes, systemd, and related tooling. Its principle can be understood like this: on the host, kind uses Docker to start a container from that image; systemd runs inside the container and in turn starts the basic processes a Kubernetes node needs, such as the container runtime and kubelet; those processes then run kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, CoreDNS, and the other components the cluster requires. Such a container therefore forms one "node" of a Kubernetes cluster.

Kind can thus run a "single-node Kubernetes cluster" with one container, or go further and run a "multi-node Kubernetes cluster" on one host with three or more containers.

Build the Kind test environment in one click

Secret script:

  1. Clone the DevStream main repository: https://github.com/devstream-...
  2. Run one command in the devstream directory: make e2e-up

It's over? It's over.

What happened? Open the Makefile and you will see that e2e-up actually runs sh hack/e2e/e2e-up.sh; inside the e2e-up.sh script, we finish building the Kubernetes test environment on top of Kind.
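
The real logic lives in that script in the repo. As a rough, hypothetical sketch (the cluster name and steps are assumed, this is not the actual DevStream script), an e2e-up style script usually boils down to something like:

#!/usr/bin/env bash
# Hypothetical sketch of an e2e-up style script, NOT the real hack/e2e/e2e-up.sh:
# create a kind cluster for e2e testing if one does not exist yet.
set -o errexit

CLUSTER_NAME="e2e"    # assumed name; the real script may differ
if ! kind get clusters | grep -qx "${CLUSTER_NAME}"; then
  kind create cluster --name "${CLUSTER_NAME}"
fi
kubectl cluster-info --context "kind-${CLUSTER_NAME}"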

It seems that, at this point, you can go AFK and have a cup of coffee!

But that wasn't much fun, was it?

Fine, let's talk about how to play with Kind in detail.

Fasten your seat belt. I'm going to be serious today.

Build a Kubernetes environment with Kind in "two commands"

Now let's build a Kind development environment. You can find the latest release of Kind and the corresponding node image on GitHub: https://github.com/kubernetes...

You can either pick a prebuilt binary or install kind with the Go toolchain. Try to choose a newer version, then download and install it with the following commands (remember to change the version number to the one you need):

# Method 1: download a prebuilt binary
cd /tmp
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.12.0/kind-darwin-arm64
chmod +x ./kind
sudo mv kind /usr/local/bin/

# Method 2: build and install with the Go toolchain
go install sigs.k8s.io/kind@v0.12.0
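
Either way, you can verify the installation afterwards:

# Should print the installed version, e.g. kind v0.12.0
kind version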

You can download the required node image in advance. Here we choose the image corresponding to Kubernetes v1.22:

kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047
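
Pre-pulling is just a normal docker pull (the digest above pins the exact build published for kind v0.12.0):

# Pre-pull the node image so "kind create cluster" doesn't have to
docker pull kindest/node:v1.22.0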

Then you can quickly pull up a set of Kubernetes environment through the following command:

kind create cluster --image=kindest/node:v1.22.0 --name=dev

The output of this command is roughly as follows:

Creating cluster "dev" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane đŸ•šī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

According to the hint in the command-line output, we should run kubectl cluster-info --context kind-dev to point kubectl at the new cluster. In fact, kind has already switched the current context for us, so a plain kubectl get shows the new Kubernetes environment directly; specifying the context explicitly only matters when there are multiple clusters.

$ kubectl get node
NAME                STATUS   ROLES                  AGE    VERSION
dev-control-plane   Ready    control-plane,master   7m4s   v1.22.0

$ kubectl get pod -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-hch75                     1/1     Running   0          10m
coredns-78fcd69978-ztqn4                     1/1     Running   0          10m
etcd-dev-control-plane                       1/1     Running   0          10m
kindnet-l8qxq                                1/1     Running   0          10m
kube-apiserver-dev-control-plane             1/1     Running   0          10m
kube-controller-manager-dev-control-plane    1/1     Running   0          10m
kube-proxy-mzfgc                             1/1     Running   0          10m
kube-scheduler-dev-control-plane             1/1     Running   0          10m
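
If you do end up with multiple clusters, the usual kubectl context commands apply:

# List all contexts; kind prefixes context names with "kind-"
kubectl config get-contexts
# Switch back to the dev cluster explicitly
kubectl config use-context kind-dev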

In this way, we quickly get an environment that we can use to test or learn Kubernetes.
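
To see the "container as node" principle from the Kind introduction with your own eyes, you can peek at the node container directly:

# The "node" is just a container on the host...
docker ps --filter name=dev-control-plane
# ...with systemd inside, running kubelet as an ordinary service
docker exec dev-control-plane systemctl status kubelet --no-pager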

Build a multi-node Kubernetes cluster environment with Kind in "three commands"

The smallest Kubernetes HA cluster needs three master nodes. Of course, we can also call a one-node, all-in-one environment a "single-node cluster". In this section, let's look at how to quickly build a multi-node Kubernetes cluster environment with Kind.

Kind cluster configuration file

When building a Kind environment, you can customize the configuration and pass the path of your configuration file with --config. The configuration format supported by Kind looks like this:

# this config file contains all config fields with comments
# NOTE: this is not a particularly useful config file
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# patch the generated kubeadm config with some extra settings
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"  
# patch it further using a JSON 6902 patch
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta2
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname    
# 1 control plane node and 3 workers
nodes:
# the control plane node config
- role: control-plane
# the three workers
- role: worker
- role: worker
- role: worker

As you can see, the configuration items fall into two parts: the upper part controls how kubeadm configures Kubernetes, and the lower part controls the roles and number of nodes. It is not hard to guess that to deploy a Kubernetes cluster with multiple nodes, we just need to specify the nodes section accordingly.

One master and three workers

We prepare a corresponding configuration file named multi-node-config.yaml, as follows:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

Then execute the following command to pull up the cluster:

$ kind create cluster --config multi-node-config.yaml \
 --image=kindest/node:v1.22.0 --name=dev4

After the command completes, the output is similar to what we saw for the single-node environment; the main difference is the extra "Joining worker nodes" step:

Creating cluster "dev4" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ đŸ“Ļ đŸ“Ļ đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane đŸ•šī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-dev4"
You can now use your cluster with:

kubectl cluster-info --context kind-dev4

Thanks for using kind! 😊

You can view the newly created cluster through the following command:

$ kubectl cluster-info --context kind-dev4
Kubernetes control plane is running at https://127.0.0.1:51851
CoreDNS is running at https://127.0.0.1:51851/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get node
NAME                 STATUS   ROLES                  AGE     VERSION
dev4-control-plane   Ready    control-plane,master   3m28s   v1.22.0
dev4-worker          Ready    <none>                 2m54s   v1.22.0
dev4-worker2         Ready    <none>                 2m54s   v1.22.0
dev4-worker3         Ready    <none>                 2m54s   v1.22.0

From the output it is clear that the dev4 cluster has one master node and three worker nodes.
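
When you no longer need an experimental cluster, cleaning up is just as quick:

# List all kind clusters on this host
kind get clusters
# Tear down the dev4 cluster and its node containers
kind delete cluster --name dev4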

Three masters and three workers: an HA cluster

Of course, "HA" here only means that the master-node components run in three copies, so there is, to a certain extent, no single point of failure on the master side; it is not "high availability" in the strict sense.

Likewise, prepare a configuration file named ha-config.yaml, as follows:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

Then execute the following command to pull up the cluster:

$ kind create cluster --config ha-config.yaml \
 --image=kindest/node:v1.22.0 --name=dev6

After the command completes, we see the familiar log output again, slightly different from before: the new steps are "Configuring the external load balancer" and "Joining more control-plane nodes":

Creating cluster "dev6" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ đŸ“Ļ đŸ“Ļ đŸ“Ļ đŸ“Ļ đŸ“Ļ
 ✓ Configuring the external load balancer âš–ī¸
 ✓ Writing configuration 📜
 ✓ Starting control-plane đŸ•šī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining more control-plane nodes 🎮
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-dev6"
You can now use your cluster with:

kubectl cluster-info --context kind-dev6

Have a nice day! 👋

Here you can also spot some fun details. For example, after the "Preparing nodes" step, the number of little package icons equals the number of nodes. Also, the closing greeting is not fixed: earlier it was "Thanks for using kind! 😊", and here it becomes "Have a nice day! 👋". Clearly the developers behind Kind are a group of "lovely" and "interesting" engineers!

Similarly, let's take a look at the newly created cluster with a few commands:

$ kubectl cluster-info --context kind-dev6
Kubernetes control plane is running at https://127.0.0.1:52937
CoreDNS is running at https://127.0.0.1:52937/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get node
NAME                  STATUS   ROLES                  AGE     VERSION
dev6-control-plane    Ready    control-plane,master   8m19s   v1.22.0
dev6-control-plane2   Ready    control-plane,master   7m46s   v1.22.0
dev6-control-plane3   Ready    control-plane,master   7m20s   v1.22.0
dev6-worker           Ready    <none>                 7m      v1.22.0
dev6-worker2          Ready    <none>                 7m      v1.22.0
dev6-worker3          Ready    <none>                 7m      v1.22.0

From the output it is clear that the dev6 cluster has three master nodes and three worker nodes.

By now we have learned how to easily build a multi-node Kubernetes cluster environment with Kind. From here on, you can choose the node count and roles according to your own needs and build a suitable test environment.

Advanced use of Kind

In the previous sections we learned how to build clusters of various shapes with Kind. To make good use of these clusters, though, we still need to master a few operational skills. In this section we will learn some advanced Kind operations.

Port mapping

Imagine a scenario: we run an Nginx service in the Kind cluster, listening on port 80. Can another machine now reach that Nginx service through port 80 of the host running the Kind cluster? Not by default: those two port 80s are clearly not in the same network namespace. We can configure port mapping as follows to solve this kind of problem.

Add the extraPortMappings configuration item in the configuration file:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
    protocol: tcp

In this way, a service exposed on the node's port 80, whether through NodePort or through a Pod using hostNetwork or hostPort, can be reached through port 80 of the host.
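
As a hedged illustration (the Pod and all of its names are made up for this example), a Pod that binds the node's port 80 via hostPort becomes reachable through the host's port 80:

apiVersion: v1
kind: Pod
metadata:
  name: nginx    # hypothetical example Pod
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
      hostPort: 80    # binds port 80 on the kind node; extraPortMappings forwards it to the host

After applying it, curl http://localhost should reach Nginx from the host, or from another machine via the host's address.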

Expose kube-apiserver

Sometimes we build the Kubernetes environment with Kind on one machine and write code on another. Then we discover that we cannot connect to the kube-apiserver of the Kind cluster to debug our program. That is because, under the default configuration, kube-apiserver listens on 127.0.0.1 plus a random port. To access it from outside, we have to bind kube-apiserver to a non-loopback interface, such as eth0.

Again, we can customize the configuration file, adding the networking.apiServerAddress item; its value is the IP of a local network interface, which you can adjust to your actual situation:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "192.168.39.1"
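
After recreating the cluster with this configuration, you can hand a kubeconfig to the other machine; kind can print it for you:

# Write a kubeconfig that points at the address configured above
kind get kubeconfig --name dev > dev-kubeconfig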

Enable Feature Gates

If we want to use some alpha-stage features, we need to enable the corresponding Feature Gates. When building an environment with kubeadm directly, this is done by configuring the ClusterConfiguration. Now that kubeadm is wrapped by Kind, how do we enable Feature Gates in Kind?

The approach is as follows, where FeatureGateName stands for the name of the Feature Gate we want to enable:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  FeatureGateName: true
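
To double-check that the gate actually reached the control plane, one way (assuming the cluster is named dev as before, with FeatureGateName still a placeholder) is to grep the API server's static Pod definition:

# The enabled gates show up in the kube-apiserver command line
kubectl -n kube-system get pod kube-apiserver-dev-control-plane -o yaml | grep feature-gates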

Import image

An environment built with kind essentially runs inside containers, so by default the kind environment cannot see images that exist only on the host. In that case, we can import images in the following ways:

# Suppose the required image is my-image:v1
kind load docker-image my-image:v1 --name dev
# Suppose the required image is a tar archive, my-image.tar
kind load image-archive my-image.tar --name dev

Knowing this, when we build a new image and want to run it in the Kind environment, the steps are:

docker build -t my-image:v1 ./my-image-dir
kind load docker-image my-image:v1 --name dev
kubectl apply -f my-image.yaml
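
Here my-image.yaml is whatever manifest consumes the image; a minimal, entirely hypothetical example (all names assumed) might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-image
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-image
  template:
    metadata:
      labels:
        app: my-image
    spec:
      containers:
      - name: my-image
        image: my-image:v1
        # Avoid imagePullPolicy: Always here; the image exists only inside
        # the kind nodes, so a registry pull would fail.
        imagePullPolicy: IfNotPresent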

How do we view the images already in the current Kind environment? That is simple too:

docker exec -it dev-control-plane crictl images

dev-control-plane is the name of the node container; when there are multiple environments, switch the name accordingly. You can also run crictl -h to see the other commands crictl supports; for example, crictl rmi <image_name> deletes an image.

Summary

There's nothing to summarize. In short, I hope you can have a smooth ride!

I have to AFK and have a cup of coffee NOW!


