k8s deployment (cloud servers can refer to this)

1. Machine preparation before k8s deployment

0. Machine preparation

Machine requirements for production and test environments can be looked up on your own. This deployment uses two Tencent Cloud lightweight servers. (Lightweight cloud servers come with quite a few pitfalls; if you are in the same situation, pay close attention to the master node installation section in Chapter 2.)

1. Upgrade the Linux kernel to the latest version

On CentOS 7, the default kernel version is 3.10.0;

#Import the public key of the ELRepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
#Install the ELRepo repository for yum
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
#View available kernel versions
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
#Install the latest mainline kernel
yum --enablerepo=elrepo-kernel install kernel-ml
# List the currently available boot entries
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# Select the appropriate entry by its index, e.g. grub2-set-default 1
grub2-set-default ${index}
# Reboot to load the new kernel
reboot
# After the reboot, verify with uname -r
uname -r

2. Set the time to synchronize with the network

centos7:

# 1 Check the system time
timedatectl 
# 2 Set the time zone to Beijing time
timedatectl set-timezone Asia/Shanghai
# 3. Set the time synchronization network
# All commands in this section are run as the root user.
# 3.1 Install ntp service
# If already installed, this command will automatically have no effect.
yum install -y ntp
# 3.2 Modify ntp related parameters
vi /etc/sysconfig/ntpd
# Modify it to:
SYNC_HWCLOCK=yes
OPTIONS="-g -x"
# 3.3 Start the ntp service
# restart is used here so the command works regardless of the ntpd service's current state
systemctl restart ntpd
# 3.4 Set the ntp service to start at boot
systemctl enable ntpd
# 3.5 Set the Linux system clock to synchronize with the remote NTP server
timedatectl set-ntp true
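
To verify that ntpd is actually synchronizing, you can check the peer list (an asterisk marks the currently selected time source; it may take a few minutes after a restart to appear):

ntpq -p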

centos8:

#1. Install chrony
yum install -y chrony
#2. Start the chrony service
systemctl start chronyd
#3. Set the system to automatically start the chrony service
systemctl enable chronyd 
#4. Check if the system time has been synchronized
date
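
For more detail than date shows, chrony's synchronization status can be inspected with:

chronyc tracking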

4. hosts settings

    vim /etc/hosts
43.143.65.53 k8s-master
1.116.132.201 k8s-node01
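
A quick sanity check that the names now resolve as expected:

ping -c 2 k8s-node01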

5. Set the firewall to iptables and empty the rules

# Stop the firewall
systemctl stop firewalld
# Disable the firewall on boot
systemctl disable firewalld
# Install iptables, flush the rules, and save them
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
# k8s nodes talk to each other over their intranet IPs, so we redirect each intranet IP to the corresponding public IP (run the matching command on the master and node respectively)
iptables -t nat -A OUTPUT -d 10.0.4.5 -j DNAT --to-destination 43.143.65.53    # k8s-master: intranet -> public
iptables -t nat -A OUTPUT -d 10.0.4.3 -j DNAT --to-destination 1.116.132.201   # k8s-node01: intranet -> public
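
Note that rules added this way live only in memory. Since iptables-services was installed above, one way to persist the NAT rules across reboots is to save the current rule set again:

service iptables save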

6. Close selinux

k8s installation requires SELinux to be disabled (or at least set to permissive mode) so that containers can access the host filesystem.

# Temporarily closed; it turns back on after the system restarts
setenforce 0
# Permanent shutdown: edit /etc/selinux/config, change SELINUX=enforcing to SELINUX=disabled, then save and exit
vi /etc/selinux/config

7. Close swap

During installation k8s checks whether the swap partition is disabled; if swap is on, pods may be placed in virtual memory, which greatly reduces performance. (The check can also be skipped with --ignore-preflight-errors=Swap.)

# temporary
swapoff -a
# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
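
You can confirm that swap is really off (the Swap line should show all zeros):

free -h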

8. Turn off unnecessary services in the system

systemctl stop postfix && systemctl disable postfix

9. Open ports (non-cloud servers can skip this step)

Since the two machines are on the public network and the k8s nodes need to communicate with each other, some ports have to be opened. The port rules can be configured directly in the Tencent Cloud console.

The following are the port configurations required by the official documentation for the master node and the worker node.
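
The original screenshots are not reproduced here; per the official Kubernetes documentation for this version, the required ports are:

Master (control-plane) node:
TCP 6443         Kubernetes API server
TCP 2379-2380    etcd server client API
TCP 10250        kubelet API
TCP 10251        kube-scheduler
TCP 10252        kube-controller-manager

Worker node:
TCP 10250        kubelet API
TCP 30000-32767  NodePort Services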

10. docker installation

yum install -y yum-utils
# Add the yum repo; the Alibaba Cloud mirror is used here
yum-config-manager  --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# List available docker-ce versions
yum list docker-ce --showduplicates | sort -r
# Install a specific version (you can pick another one from the list above)
yum -y install docker-ce-20.10.12-3.el7
# Configure a domestic registry mirror for acceleration; you can also use your own registry. Below is the personal Alibaba Cloud accelerator I applied for.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ui3fq00k.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Reload the daemon configuration
systemctl daemon-reload
# Restart docker so daemon.json takes effect
systemctl restart docker
# Start docker on boot
systemctl enable docker
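
To confirm that docker picked up the systemd cgroup driver configured in daemon.json, a quick check:

docker info | grep -i 'cgroup driver'
# Expected output: Cgroup Driver: systemd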

11. Bridge IPv4 traffic to the iptables chains (both master and node01 nodes are required)

# The bridge sysctls below require the br_netfilter kernel module
modprobe br_netfilter
tee /etc/sysctl.d/k8s.conf <<-'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Configuration takes effect
sysctl -p /etc/sysctl.d/k8s.conf
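
If sysctl -p complains about unknown keys, the br_netfilter module is not loaded. A quick check that the module and settings are in place:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
# Expected output: net.bridge.bridge-nf-call-iptables = 1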

12. Configure the k8s yum source (both master and node01 nodes are required)

tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

13. Install kubeadm (initializes the cluster), kubelet (runs pods) and kubectl (k8s command-line tool) (both master and node01 nodes are required)

yum install -y kubelet-1.20.5
yum install -y kubeadm-1.20.5
yum install -y kubectl-1.20.5
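
One step the official kubeadm guide adds here that is worth running too: enable kubelet so it survives reboots (kubeadm itself starts it during init), and confirm the installed versions:

systemctl enable kubelet
kubeadm version
kubectl version --client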

14. Pull images and retag

Create a script images.sh with the following content:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5  k8s.gcr.io/kube-proxy:v1.20.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5 k8s.gcr.io/kube-scheduler:v1.20.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5 k8s.gcr.io/kube-apiserver:v1.20.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5 k8s.gcr.io/kube-controller-manager:v1.20.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0  k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2  k8s.gcr.io/pause:3.2


docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

Run the script:

bash images.sh

You can see that the images have been downloaded successfully and retagged as k8s.gcr.io/*.
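
A quick way to confirm the retag worked (the seven k8s.gcr.io images should be listed):

docker images | grep 'k8s.gcr.io'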

2. k8s deployment

1. master node installation

1. Initialization

# Here 43.143.65.53 is the public network IP
kubeadm init \
  --apiserver-advertise-address=43.143.65.53 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.5 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all \
  --v=6

An error is bound to occur at this step.

etcd.yaml needs to be modified when the following situation occurs:

After the command above is entered, kubeadm starts initializing the master node, but etcd cannot start because its generated configuration is wrong, so the file needs to be modified.
File path: /etc/kubernetes/manifests/etcd.yaml

Here 43.143.65.53 is the Tencent Cloud public IP. The flags to pay attention to are --listen-client-urls and --listen-peer-urls: delete the public IP from --listen-client-urls, and change --listen-peer-urls to point at 127.0.0.1:2380.

The reason: when a Tencent Cloud machine is on a VPC network, the public IP is mapped onto the private NIC via NAT and is never actually bound to the machine, so etcd cannot listen on it. Interested readers can read up on NAT. This is also why many people fail to install k8s clusters on Tencent Cloud or Alibaba Cloud.

When output like the following appears, the master node has been installed successfully (take note of the kubeadm join command in the output; it is needed when the node joins the cluster):

# Just follow the prompts
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
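
At this point kubectl can already reach the API server; the master will show NotReady until the network plugin is deployed in the next step:

kubectl get nodes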

2. Deploy a container network

curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >>kube-flannel.yml
chmod 777 kube-flannel.yml 
kubectl apply -f kube-flannel.yml
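
The flannel pods should come up within a minute or two; depending on the flannel version they land in the kube-system namespace (older manifests) or kube-flannel (newer ones):

kubectl get pods -A | grep flannel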

3. View cs status

kubectl get cs

You will find that the controller-manager and scheduler show an unhealthy status.

The reason is that their default port is set to 0; you need to comment out the --port=0 line in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests, as sketched below.
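
As a sketch, the same edit can be done with sed instead of opening the two files by hand:

sed -i 's/- --port=0/#&/' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i 's/- --port=0/#&/' /etc/kubernetes/manifests/kube-scheduler.yaml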

# Restart the service after commenting
systemctl restart kubelet.service
# View component status
kubectl get cs

If kubectl get cs now shows both components as Healthy, the fix worked;

At this point, the master node deployment is complete. (In order not to waste resources, we generally also want the master node to run pods, which requires removing the master taint with the following command.)

kubectl taint nodes --all node-role.kubernetes.io/master-

2. Worker node deployment

On the worker node, simply run the join command that was printed when the master node was initialized;

kubeadm join 43.143.65.53:6443 --token 3ycdz8.v2g4csbh3p20qweg \
    --discovery-token-ca-cert-hash sha256:c23830bb3046cc14cdb4dd422eab6cda74f747648b79e09bb4d9fea8244c9e04

The token is valid for 24 hours by default, after which it can no longer be used. When that happens, a new token needs to be created; it can be generated quickly with:

kubeadm token create --print-join-command

3. Test

Wait about 5 minutes, then run the following on the master node:

kubectl get nodes -o wide

You can see that node01 has changed from NotReady to Ready.

Create a pod in the Kubernetes cluster and verify that it is running:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Access address: http://<IP of any node>:<NodePort> (the NodePort is shown by kubectl get svc).

So far, the k8s deployment is successful!!!

3. Deploy the dashboard

Dashboard is an officially provided UI that can be used for basic management of K8s resources.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

If you cannot download it, you can search for the same version of recommended.yaml yourself;
it can also be downloaded directly from the cloud disk (extraction code: 69jd): recommended.yaml
PS: The file on the cloud disk has already been modified. If you use it, you can skip the next step and apply it directly.

By default, the Dashboard is only accessible from inside the cluster. Run vi recommended.yaml and change the Service type to NodePort so that machines outside the cluster can reach it:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

kubectl apply -f recommended.yaml
kubectl get pods -n kubernetes-dashboard

After all pods are in the Running state, create a service account and bind it to the default cluster-admin cluster role:

# create user
kubectl create serviceaccount dashboard-admin -n kube-system
# User authorization
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Get user Token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Access address: https://<IP of any Node>:30443, copy and fill in the token generated by the previous command to log in.


At this point, the k8s deployment is all done.

PS: Thanks to the great article:
How to deploy a k8s cluster through kubeadm
