Deploy a complete Kubernetes high-availability cluster (binary, latest version v1.18)


Li Zhenliang · 2020-06-01
Category: Docker/K8S · Tags: kubernetes, docker, etcd

If you encounter problems in your study or the document is wrong, you can contact a Liang ~ wechat: init1024

 


 

1, Prerequisite knowledge

1.1 Two ways to deploy a Kubernetes cluster in production

At present, there are two main ways to deploy Kubernetes cluster in production:

  • kubeadm

kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for rapid deployment of Kubernetes clusters.

Official address: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • Binary package

Download the binary package of the distribution from github and manually deploy each component to form a Kubernetes cluster.

kubeadm lowers the deployment threshold but hides many details, which makes problems harder to troubleshoot. If you want everything to be more controllable, deploying the Kubernetes cluster from binary packages is recommended. Although manual deployment is more work, you learn many working principles along the way, which also helps with later maintenance.

1.2 installation requirements

Before starting, the deployment of Kubernetes cluster machines needs to meet the following conditions:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2GB or more RAM, 2 or more CPUs, 30GB or more disk
  • Internet access is needed to pull images; if the servers cannot reach the Internet, download the images in advance and import them on each node
  • Disable the swap partition

1.3 preparation environment

Software environment:

Software            Version
Operating system    CentOS 7.8_x64 (mini)
Docker              19-ce
Kubernetes          1.18

Overall server planning:

Role                     IP                                   Components
k8s-master1              192.168.31.71                        kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2              192.168.31.74                        kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1                192.168.31.72                        kubelet, kube-proxy, docker, etcd
k8s-node2                192.168.31.73                        kubelet, kube-proxy, docker, etcd
Load Balancer (Master)   192.168.31.81, 192.168.31.88 (VIP)   Nginx L4
Load Balancer (Backup)   192.168.31.82                        Nginx L4

Note: considering that some readers' computers have limited resources and cannot run this many virtual machines, this high-availability cluster is built in two stages. First deploy a single-Master architecture (192.168.31.71/72/73), then expand it to the multi-Master architecture above, which also walks through the Master scale-out process.

Single Master architecture diagram:

Single Master server planning:

Role          IP              Components
k8s-master    192.168.31.71   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1     192.168.31.72   kubelet, kube-proxy, docker, etcd
k8s-node2     192.168.31.73   kubelet, kube-proxy, docker, etcd

1.4 operating system initialization configuration

# Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld

# Close selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Close swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the host name according to the plan
hostnamectl set-hostname <hostname>

# Add hosts in master
cat >> /etc/hosts << EOF
192.168.31.71 k8s-master
192.168.31.72 k8s-node1
192.168.31.73 k8s-node2
EOF

# The chain that passes bridged IPv4 traffic to iptables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # take effect

# time synchronization 
yum install ntpdate -y
ntpdate time.windows.com

2, Deploy Etcd cluster

Etcd is a distributed key-value storage system. Kubernetes uses etcd for data storage, so first prepare an etcd database. To avoid a single point of failure, etcd should be deployed as a cluster. Here a three-node cluster is used, which tolerates one machine failure; a five-node cluster would tolerate two machine failures.

Node name    IP
etcd-1       192.168.31.71
etcd-2       192.168.31.72
etcd-3       192.168.31.73

Note: to save machines, etcd is co-located with the K8s node machines here. It can also be deployed separately from the K8s cluster, as long as the apiserver can reach it.

2.1 prepare cfssl certificate generation tool

cfssl is an open source certificate management tool. It uses json files to generate certificates, which is more convenient to use than openssl.

Find any server to operate. Here, use the Master node.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.2 generating Etcd certificate

1. Self-signed certificate authority (CA)

Create working directory:

mkdir -p ~/TLS/{etcd,k8s}

cd ~/TLS/etcd

Self signed CA:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem  ca.pem

2. Issue Etcd HTTPS certificate with self signed CA

To create a certificate request file:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.31.71",
    "192.168.31.72",
    "192.168.31.73"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF


Note: the IPs in the hosts field above are the internal cluster communication IPs of all etcd nodes; none of them may be omitted! To make later expansion easier, you can also add a few reserved IPs.

Generate certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem  server.pem

2.3 download binaries from Github

Download address: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
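For example, fetch the package directly onto the Master node with wget and check that it is present before continuing:

wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
ls -lh etcd-v3.4.9-linux-amd64.tar.gz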

2.4 deploy Etcd cluster

The following operations are performed on node 1. To simplify the operation, all files generated by node 1 will be copied to node 2 and node 3 later

1. Create a working directory and unzip the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  • ETCD_NAME: node name, unique in the cluster
  • ETCD_DATA_DIR: Data Directory
  • ETCD_LISTEN_PEER_URLS: cluster communication listening address
  • ETCD_LISTEN_CLIENT_URLS: client access listening address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: cluster notification address
  • ETCD_ADVERTISE_CLIENT_URLS: client notification address
  • ETCD_INITIAL_CLUSTER: cluster node address
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: the current status of joining a cluster. New is a new cluster, and existing means joining an existing cluster

3. systemd management etcd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. Copy the certificate just generated

Copy the certificate just generated to the path in the configuration file:

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5. Start and set startup

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

6. Copy all files generated by node 1 above to node 2 and node 3

scp -r /opt/etcd/ root@192.168.31.72:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.72:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@192.168.31.73:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.73:/usr/lib/systemd/system/

Then, on node 2 and node 3, modify the node name and the current server IP in the etcd.conf configuration file:

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # Change to etcd-2 on node 2 and etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"   # Change to the current server IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"   # Change to the current server IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"   # Change to the current server IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"   # Change to the current server IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd on node 2 and node 3 and enable it at boot, as above.
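For reference, these are the same commands as on node 1 (run them on node 2 and node 3). Note that on the very first start, the command on node 1 may block until the other members come up, because the cluster needs a quorum before it reports ready:

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd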

7. View cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379" endpoint health

https://192.168.31.71:2379 is healthy: successfully committed proposal: took = 8.154404ms
https://192.168.31.73:2379 is healthy: successfully committed proposal: took = 9.044117ms
https://192.168.31.72:2379 is healthy: successfully committed proposal: took = 10.000825ms

If the above information is output, the cluster deployment is successful. If there is a problem, the first step is to look at the logs: /var/log/messages or journalctl -u etcd

3, Install Docker

Download address: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

The following operations are performed on all nodes. The binary installation is used here; installation via yum also works.
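If you would rather install via yum than the binary package, a rough equivalent is shown below (a sketch, assuming Internet access and the upstream Docker CE repository; the version pin is illustrative):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io
systemctl enable --now docker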

3.1 decompress binary package

tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin

3.2 systemd management docker

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

3.3 Create the configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

  • registry-mirrors: Alibaba Cloud image accelerator

3.4 start and set startup

systemctl daemon-reload
systemctl start docker
systemctl enable docker

4, Deploy Master Node


4.1 Generate kube-apiserver certificate

1. Self-signed certificate authority (CA)

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem  ca.pem

2. Issue kube-apiserver HTTPS certificate with the self-signed CA

To create a certificate request file:

cd ~/TLS/k8s
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.31.71",
      "192.168.31.72",
      "192.168.31.73",
      "192.168.31.74",
      "192.168.31.81",
      "192.168.31.82",
      "192.168.31.88",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: the IPs in the hosts field above cover all of the Master/LB/VIP IPs; none of them may be omitted! To make later expansion easier, you can also add a few reserved IPs.

Generate certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem  server.pem

4.2 download binaries from Github

Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

Note: the link contains many packages. Downloading the server package alone is enough; it contains the binaries for both the Master and the Worker Nodes.
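For example (the exact file name is assumed from the standard release naming convention for v1.18.3):

wget https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz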

4.3 decompressing binary packages

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4.4 Deploy kube-apiserver

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379 \\
--bind-address=192.168.31.71 \\
--secure-port=6443 \\
--advertise-address=192.168.31.71 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: about the double backslashes (\\) above: the first backslash is the escape character and the second is the line-continuation character. The escape is needed so that the heredoc (EOF) keeps the line continuation in the written file.

  • --logtostderr: whether to log to stderr (false here, so logs go to files under --log-dir)
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listening address
  • --secure-port: HTTPS secure port
  • --advertise-address: address advertised to the cluster
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP address range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authentication and authorization, enabling RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificates used by apiserver to access kubelet
  • --tls-xxx-file: apiserver HTTPS certificates
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit log settings

2. Copy the certificate just generated

Copy the certificate just generated to the path in the configuration file:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

3. Enable TLS Bootstrapping mechanism

TLS bootstrapping: after the Master's apiserver enables TLS authentication, the kubelet and kube-proxy on each Node must use valid certificates issued by the CA to communicate with kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and also complicates cluster expansion. To simplify the process, Kubernetes introduces the TLS bootstrapping mechanism to issue client certificates automatically: kubelet applies to apiserver for its certificate as a low-privilege user, and the certificate is signed dynamically by apiserver. This approach is strongly recommended on Nodes. At present it is mainly used for kubelet; the kube-proxy certificate is still issued by us manually.

TLS bootstrapping workflow:

Create the token file in the above configuration file:

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token, user name, UID, user group

The token can also be generated yourself and substituted:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

4. systemd management apiserver

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5. Start and set startup

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

6. Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

4.5 Deploy kube-controller-manager

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

  • --master: connect to apiserver via the local insecure port 8080
  • --leader-elect: enable leader election when multiple instances of this component run (HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates for kubelet, the same CA as apiserver

2. systemd management of kube-controller-manager

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start and set startup

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

4.6 Deploy kube-scheduler

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

  • --master: connect to apiserver via the local insecure port 8080
  • --leader-elect: enable leader election when multiple instances of this component run (HA)

2. systemd management of kube-scheduler

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start and set startup

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

4. View cluster status

All components have been started successfully. Check the current cluster component status through kubectl tool:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

The above output indicates that the Master node component operates normally.

5, Deploy Worker Node


The following operations are still performed on the Master node; it will also act as a Worker Node.

5.1 create working directory and copy binary files

Create a working directory on all Worker Nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

Copy from master node:

cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # Local copy

5.2 deploying kubelet

1. Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: an empty path; the file is generated automatically and is later used to connect to apiserver
  • --bootstrap-kubeconfig: used to apply to apiserver for a certificate the first time
  • --config: configuration parameter file
  • --cert-dir: directory where kubelet certificates are generated
  • --pod-infra-container-image: image of the pause container that manages the Pod network

2. Configuration parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

3. Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.31.71:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # Must match the token in token.csv

# Generate kubelet bootstrap kubeconfig configuration file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Copy it to the configuration file path:

cp bootstrap.kubeconfig /opt/kubernetes/cfg

4. systemd management kubelet

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start and set startup

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

5.3 approve kubelet certificate application and join the cluster

# View kubelet certificate request
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approval application
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

# View node
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   7s    v1.18.3

Note: because the network plugin has not been deployed yet, the node shows NotReady.

5.4 Deploy kube-proxy

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

2. Configuration parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF

3. Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

# Switch working directory
cd TLS/k8s

# Create certificate request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

Generate kubeconfig file:

KUBE_APISERVER="https://192.168.31.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy to the specified path of the configuration file:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

4. systemd management of kube-proxy

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start and set startup

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

5.5 deploy CNI network

Prepare CNI binary files first:

Download address: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
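For example:

wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz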

Unzip the binary package and move it to the default working directory:

mkdir /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

Deploy CNI network:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

The default image address may be unreachable, so it is replaced with an image hosted on Docker Hub.

kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-2pc95   1/1     Running   0          72s

kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   41m   v1.18.3

Once the network plugin is deployed, the Node becomes Ready.

5.6 authorize apiserver to access kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml

5.7 Add a new Worker Node

1. Copy the deployed Node related files to the new Node

On the Master node, copy the Worker Node related files to the new nodes 192.168.31.72/73:

scp -r /opt/kubernetes root@192.168.31.72:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.72:/usr/lib/systemd/system

scp -r /opt/cni/ root@192.168.31.72:/opt/

scp /opt/kubernetes/ssl/ca.pem root@192.168.31.72:/opt/kubernetes/ssl

2. Delete kubelet certificate and kubeconfig file

rm /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are automatically generated after the certificate application is approved. Each Node is different and must be deleted and regenerated.

3. Modify host name

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1

4. Start and set startup

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

5. Approve the new Node kubelet Certificate Application on the Master

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. View Node status

kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      <none>   65m   v1.18.3
k8s-node1    Ready      <none>   12m   v1.18.3
k8s-node2    Ready      <none>   81s   v1.18.3

Node 2 (192.168.31.73) is set up the same way as above. Remember to change the hostname!
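As a shortcut, because the files on a new node were copied from the Master (whose hostname-override is k8s-master), the two hostname fields can be switched with sed instead of editing them by hand (a sketch; verify both files afterwards, then continue with steps 2, 4, 5 and 6 as above):

sed -i 's/k8s-master/k8s-node2/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml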

6, Deploy Dashboard and CoreDNS

6.1 deploying Dashboard

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

The default Dashboard can only be accessed inside the cluster. Modify the Service to NodePort type and expose it to the outside:

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS              RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-z8gfb   1/1     Running             0          2m18s
pod/kubernetes-dashboard-9774cc786-q2gsx         1/1     Running             0          2m19s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.141   <none>        8000/TCP        2m19s
service/kubernetes-dashboard        NodePort    10.0.0.239   <none>        443:30001/TCP   2m19s

Access address: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard using the output token.

6.2 deploying CoreDNS

CoreDNS is used for Service name resolution within the cluster.
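The coredns.yaml manifest itself is not included in this article; whichever CoreDNS manifest you use, make sure its Service cluster IP matches the clusterDNS address set in kubelet-config.yml (10.0.0.2). A quick check after applying it (the kube-dns Service name is assumed from the standard CoreDNS manifest):

kubectl get svc kube-dns -n kube-system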

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system 
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s
kube-flannel-ds-amd64-2pc95   1/1     Running   0          38m
kube-flannel-ds-amd64-7qhdx   1/1     Running   0          15m
kube-flannel-ds-amd64-99cr8   1/1     Running   0          26m

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

No problem parsing.

7, High availability architecture (expanding to a multi-Master architecture)

As a container cluster system, Kubernetes achieves self-healing of Pod failures through health checks plus restart policies, distributes Pods across nodes via its scheduling algorithm, monitors the expected number of replicas, and automatically starts replacement Pods on healthy Nodes when a Node fails, providing high availability at the application layer.

For a Kubernetes cluster, high availability also covers two more aspects: high availability of the etcd database and high availability of the Kubernetes Master components. For etcd, we have already built a three-node cluster. This section explains and implements high availability of the Master node.

The Master node acts as the control center and keeps the whole cluster healthy by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master node fails, you cannot use kubectl or the API for any cluster management.

The Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. The kube-controller-manager and kube-scheduler components already achieve high availability through their leader election mechanism, so Master high availability mainly concerns kube-apiserver. Since it serves an HTTP API, its high availability is similar to that of a web server: put a load balancer in front of it to balance the load, and it can be scaled horizontally.

Multi Master architecture diagram:

7.1 installing Docker

Same as above.

7.2 deploy Master Node (192.168.31.74)

The setup of the new Master is identical to all the operations already performed on the Master1 node, so we only need to copy all of Master1's K8s files, then modify the server IP and hostname before starting the services.

1. Create etcd certificate directory

Create etcd certificate directory in Master2 (192.168.31.74):

mkdir -p /opt/etcd/ssl

2. Copy files (Master1 operation)

Copy all of Master1's K8s files and etcd certificates:

scp -r /opt/kubernetes root@192.168.31.74:/opt
scp -r /opt/cni/ root@192.168.31.74:/opt
scp -r /opt/etcd/ssl root@192.168.31.74:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.31.74:/usr/lib/systemd/system
scp /usr/bin/kubectl  root@192.168.31.74:/usr/bin

3. Delete certificate file

Delete kubelet certificate and kubeconfig file:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

4. Modify the IP and host name of the configuration file

Modify the apiserver, kubelet, and kube-proxy configuration files to use the local IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf 
...
--bind-address=192.168.31.74 \
--advertise-address=192.168.31.74 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

5. Start and set startup

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy

6. View cluster status

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

7. Approve kubelet certificate application

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    <none>   34h   v1.18.3
k8s-master2   Ready    <none>   83m   v1.18.3
k8s-node1     Ready    <none>   33h   v1.18.3
k8s-node2     Ready    <none>   33h   v1.18.3

7.3 deploy Nginx load balancer

Kube apiserver high availability architecture diagram:

Software involved:

  • Keepalived is mainstream high-availability software that implements hot standby between two servers by binding a VIP. In the topology above, Keepalived decides whether a failover (moving the VIP) is needed based on the running state of Nginx: when the Nginx master node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx is highly available.

  • Nginx is a mainstream web server and reverse proxy. Here its Layer 4 (stream) module is used to load-balance the apiserver instances.

1. Install software package (active / standby)

 yum install epel-release -y
 yum install nginx keepalived -y

2. Nginx configuration file (active / standby)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Four layer load balancing provides load balancing for two Master apiserver components
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.31.71:6443;   # Master1 APISERVER IP:PORT
       server 192.168.31.74:6443;   # Master2 APISERVER IP:PORT
    }
    
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF

3. keepalived configuration file (Nginx Master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33
    virtual_router_id 51 # VRRP routing ID instance. Each instance is unique 
    priority 100    # Priority, standby server setting 90 
    advert_int 1    # Specify the notification interval of VRRP heartbeat packet, which is 1 second by default 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.31.88/24
    } 
    track_script {
        check_nginx
    } 
}
EOF
  • vrrp_script: Specifies the script to check nginx working status (judge whether to fail over according to nginx status)

  • virtual_ipaddress: virtual IP (VIP)

Check nginx status script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

4. keepalived configuration file (Nginx Backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP routing ID instance. Each instance is unique 
    priority 90
    advert_int 1
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.31.88/24
    } 
    track_script {
        check_nginx
    } 
}
EOF

Script for checking nginx running status in the above configuration file:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived determines whether to fail over according to the status code returned by the script (0 is normal, non-0 is abnormal).

5. Start and set startup

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

6. Check the status of keepalived

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.31.88/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:f72c/64 scope link 
       valid_lft forever preferred_lft forever

It can be seen that 192.168.31.88 virtual IP is bound to the ens33 network card, indicating that it works normally.

7. Nginx+Keepalived high availability test

Turn off the primary node Nginx and test whether the VIP drifts to the standby node server.

On the Nginx Master, run: pkill nginx
On the Nginx Backup, run ip addr and confirm that the VIP has been bound successfully.

8. Access load balancer test

From any node in the K8s cluster, use curl to query the K8s version through the VIP:

curl -k https://192.168.31.88:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.3",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The K8s version information is returned correctly, which means the load balancer is working properly. Request data flow: curl -> VIP (Nginx) -> apiserver

You can also see the forwarding apiserver IP by viewing the Nginx log:

tail /var/log/nginx/k8s-access.log -f
192.168.31.81 192.168.31.71:6443 - [30/May/2020:11:15:10 +0800] 200 422
192.168.31.81 192.168.31.74:6443 - [30/May/2020:11:15:26 +0800] 200 422

It's not over yet. There's the next key step.

7.4 modify all Worker Node connections

Although we have added Master2 and a load balancer, we expanded from a single-Master architecture, so all Node components are still connected to Master1. If they are not switched over to the load balancer's VIP, the Master remains a single point of failure.

Therefore, the next step is to change the apiserver connection address in the configuration files of all Node components:

Role           IP
k8s-master1    192.168.31.71
k8s-master2    192.168.31.74
k8s-node1      192.168.31.72
k8s-node2      192.168.31.73

That is, the nodes viewed through the kubectl get node command.

Run the following on all of the above nodes:

sed -i 's#192.168.31.71:6443#192.168.31.88:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy

Check node status:

kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master    Ready    <none>   34h    v1.18.3
k8s-master2   Ready    <none>   101m   v1.18.3
k8s-node1     Ready    <none>   33h    v1.18.3
k8s-node2     Ready    <none>   33h    v1.18.3

So far, a complete set of Kubernetes high availability cluster deployment has been completed.

If you encounter problems in your study or the document is wrong, you can contact a Liang ~ wechat: init1024

Author: a Liang

Note: because K8s versions are updated quickly, this document will be updated periodically; updates will be published on the official WeChat account.

2, Deploy Etcd cluster

Etcd is a distributed key-value storage system. Kubernetes uses etcd to store its data, so prepare an etcd database first. To avoid a single point of failure, etcd should be deployed as a cluster: a three-node cluster tolerates the failure of one machine, and a five-node cluster tolerates the failure of two.

Node name       IP
etcd-1          192.168.31.71
etcd-2          192.168.31.72
etcd-3          192.168.31.73

Note: to save machines, etcd is deployed here on the same machines as the k8s nodes. It can also be deployed separately from the k8s cluster, as long as the apiserver can reach it.

2.1 prepare cfssl certificate generation tool

cfssl is an open source certificate management tool that generates certificates from JSON files, which is easier to use than openssl.

Any server can be used for this step; the Master node is used here.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.2 generating Etcd certificate

1. Self-signed certificate authority (CA)

Create working directory:

mkdir -p ~/TLS/{etcd,k8s}

cd TLS/etcd

Self signed CA:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem  ca.pem

2. Issue Etcd HTTPS certificate with self signed CA

To create a certificate request file:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.31.71",
    "192.168.31.72",
    "192.168.31.73"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF


Note: the IPs in the hosts field of the file above are the internal communication IPs of all etcd nodes in the cluster, and none of them may be omitted! To make later expansion easier, you can also list a few reserved IP addresses.

Generate certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem  server.pem
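
Optionally, you can verify that every etcd node IP really ended up in the certificate's SANs, using the cfssl-certinfo tool installed earlier:

cfssl-certinfo -cert server.pem | grep -A 5 sans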

2.3 download binaries from Github

Download address: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

2.4 deploy Etcd cluster

The following operations are performed on node 1. To simplify things, all files generated on node 1 will later be copied to node 2 and node 3.

1. Create a working directory and unzip the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  • ETCD_NAME: node name, unique in the cluster
  • ETCD_DATA_DIR: Data Directory
  • ETCD_LISTEN_PEER_URLS: cluster communication listening address
  • ETCD_LISTEN_CLIENT_URLS: client access listening address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: cluster notification address
  • ETCD_ADVERTISE_CLIENT_URLS: client notification address
  • ETCD_INITIAL_CLUSTER: cluster node address
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: the current status of joining a cluster. New is a new cluster, and existing means joining an existing cluster

3. systemd management etcd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. Copy the certificate just generated

Copy the certificate just generated to the path in the configuration file:

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5. Start and set startup

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

6. Copy all files generated by node 1 above to node 2 and node 3

scp -r /opt/etcd/ root@192.168.31.72:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.72:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@192.168.31.73:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.73:/usr/lib/systemd/system/

Then, on node 2 and node 3 respectively, modify the node name and the current server IP in the etcd.conf configuration file:

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # Change here: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"   # Change to the current server IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"   # Change to the current server IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"   # Change to the current server IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"   # Change to the current server IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
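
Instead of editing the file by hand, the per-node changes can also be scripted. A minimal sketch (not part of the original article), to be run on node 2 and node 3 after the scp step, with NODE_NAME and NODE_IP set to that node's values:

# Hypothetical helper: patch the node name and the listen/advertise addresses only.
NODE_NAME=etcd-2
NODE_IP=192.168.31.72
sed -i "s|^ETCD_NAME=.*|ETCD_NAME=\"${NODE_NAME}\"|" /opt/etcd/cfg/etcd.conf
sed -i "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://${NODE_IP}:2380\"|" /opt/etcd/cfg/etcd.conf
sed -i "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://${NODE_IP}:2379\"|" /opt/etcd/cfg/etcd.conf
sed -i "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${NODE_IP}:2380\"|" /opt/etcd/cfg/etcd.conf
sed -i "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://${NODE_IP}:2379\"|" /opt/etcd/cfg/etcd.conf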

Finally, start etcd on node 2 and node 3 and enable it at boot, in the same way as above.

7. View cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379" endpoint health

https://192.168.31.71:2379 is healthy: successfully committed proposal: took = 8.154404ms
https://192.168.31.73:2379 is healthy: successfully committed proposal: took = 9.044117ms
https://192.168.31.72:2379 is healthy: successfully committed proposal: took = 10.000825ms

If the above information is output, the cluster deployment is successful. If there is a problem, the first step is to check the logs: /var/log/messages or journalctl -u etcd
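
In the same way, you can also list the cluster members, for example:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379" --write-out=table member list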

3, Install Docker

Download address: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

The following operations are performed on all nodes. A binary installation is used here; installing via yum works just as well.

3.1 decompress binary package

tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin

3.2 systemd management docker

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

3.3 create the configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
  • registry-mirrors: Alibaba Cloud image registry mirror (pull accelerator)
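
Other daemon options can be added to the same file if needed. For example, a possible daemon.json (optional, not required by this tutorial) that also enables basic container log rotation:

{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}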

3.4 start and set startup

systemctl daemon-reload
systemctl start docker
systemctl enable docker

4, Deploy Master Node

If you encounter problems in your study or the document is wrong, you can contact a Liang ~ wechat: init1024

4.1 generate kube-apiserver certificate

1. Self-signed certificate authority (CA)

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem  ca.pem

2. Issue the kube-apiserver HTTPS certificate with the self-signed CA

To create a certificate request file:

cd TLS/k8s
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.31.71",
      "192.168.31.72",
      "192.168.31.73",
      "192.168.31.74",
      "192.168.31.81",
      "192.168.31.82",
      "192.168.31.88",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: the IPs in the hosts field of the file above must include every Master/LB/VIP IP, and none of them may be omitted! To make later expansion easier, you can also list a few reserved IP addresses.

Generate certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem  server.pem

4.2 download binaries from Github

Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

Note: open the link and you will find many packages in it. It is enough to download a server package, which contains the binary files of Master and Worker Node.

4.3 decompressing binary packages

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4.4 deploy kube-apiserver

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379 \\
--bind-address=192.168.31.71 \\
--secure-port=6443 \\
--advertise-address=192.168.31.71 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: regarding the two backslashes \\ above, the first is an escape character and the second is a line-continuation character. The escape is needed so that the line continuation is preserved when the file is written through the EOF heredoc.

  • --logtostderr: logging switch; set to false so logs are written to files under --log-dir
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listening address
  • --secure-port: HTTPS secure port
  • --advertise-address: address advertised to the rest of the cluster
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP address range
  • --enable-admission-plugins: admission control plug-ins
  • --authorization-mode: authorization modes; enables RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrapping mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: port range allocated to NodePort-type Services
  • --kubelet-client-xxx: client certificate and key used by the apiserver to access kubelet
  • --tls-xxx-file: apiserver HTTPS certificate and key
  • --etcd-xxxfile: certificates used to connect to the etcd cluster
  • --audit-log-xxx: audit log settings

2. Copy the certificate just generated

Copy the certificate just generated to the path in the configuration file:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

3. Enable TLS Bootstrapping mechanism

TLS bootstrapping: after the Master apiserver enables TLS authentication, the kubelet and kube-proxy on Node nodes must use valid certificates issued by the CA to communicate with kube-apiserver. When there are many Node nodes, issuing these client certificates by hand is a lot of work and also makes cluster expansion more complex. To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: kubelet applies to the apiserver for a certificate as a low-privileged user, and the kubelet certificate is signed dynamically by the apiserver. It is therefore strongly recommended to use this mechanism on Nodes. At present it is mainly used for kubelet; the kube-proxy certificate is still issued by us manually.

TLS bootstrapping workflow:

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token, user name, UID, user group

A token can also be generated yourself and substituted in:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
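
For example, a small sketch (not from the original article) that regenerates token.csv with a fresh random token; keep in mind that the same token must later be reused when generating bootstrap.kubeconfig:

# Hypothetical helper: write a new random token into token.csv.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
cat /opt/kubernetes/cfg/token.csv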

4. systemd management apiserver

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5. Start and set startup

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
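
As a quick sanity check (optional, not in the original article), the apiserver should now answer on the secure port; under the default RBAC bootstrap roles the /version endpoint should be readable without authentication:

curl --cacert /opt/kubernetes/ssl/ca.pem https://192.168.31.71:6443/version
# A JSON response reporting major "1" and minor "18" indicates the apiserver is serving requests.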

6. Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

4.5 deploy kube-controller-manager

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
  • --master: connect to the apiserver through the local insecure port 8080
  • --leader-elect: enable leader election when multiple instances run (for HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates for kubelet, consistent with the apiserver CA

2. systemd Controller Manager

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start and set startup

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

4.6 deploy kube-scheduler

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
  • --master: connect to the apiserver through the local insecure port 8080
  • --leader-elect: enable leader election when multiple instances run (for HA)

2. systemd management scheduler

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start and set startup

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

4. View cluster status

All components have been started successfully. Check the current cluster component status through kubectl tool:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

The above output indicates that the Master node component operates normally.

5, Deploy Worker Node

If you encounter problems in your study or the document is wrong, you can contact a Liang ~ wechat: init1024

The following is still performed on the Master node, which also acts as a Worker node at the same time.

5.1 create working directory and copy binary files

Create a working directory on all worker nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

Copy from master node:

cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # Local copy

5.2 deploying kubelet

1. Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
  • --bootstrap-kubeconfig: used to apply to the apiserver for a certificate the first time
  • --config: configuration parameter file
  • --cert-dir: directory where kubelet certificates are generated
  • --pod-infra-container-image: image of the pause container that manages the Pod network

2. Configuration parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

3. Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.31.71:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # Must be consistent with the token in token.csv

# Generate kubelet bootstrap kubeconfig configuration file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
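
Before copying it, you can optionally inspect the generated file (certificate data is shown as redacted):

kubectl config view --kubeconfig=bootstrap.kubeconfig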

Copy to profile path:

cp bootstrap.kubeconfig /opt/kubernetes/cfg

4. systemd management kubelet

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start and set startup

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

5.3 approve kubelet certificate application and join the cluster

# View kubelet certificate request
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approval application
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

# View node
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   7s    v1.18.3

Note: because the network plug-in has not been deployed yet, the node status will be NotReady

5.4 deploy kube-proxy

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

2. Configuration parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF

3. Generate the kube-proxy.kubeconfig file

Generate Kube proxy certificate:

# Switch working directory
cd TLS/k8s

# Create certificate request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

Generate kubeconfig file:

KUBE_APISERVER="https://192.168.31.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy to the specified path of the configuration file:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

4. systemd management kube-proxy

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start and set startup

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

5.5 deploy CNI network

Prepare CNI binary files first:

Download address: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

Unzip the binary package and move it to the default working directory:

mkdir /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

Deploy CNI network:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

The default image address is not reachable from some networks, so it is replaced with an image hosted on Docker Hub.
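
A quick optional check that the replacement took effect:

grep -n "image:" kube-flannel.yml
# Every image line should now reference lizhenliang/flannel:v0.12.0-amd64.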

kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-2pc95   1/1     Running   0          72s

kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   41m   v1.18.3

Once the network plug-in is deployed, the Node becomes Ready.

5.6 authorize apiserver to access kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml

5.7 add a new Worker Node

1. Copy the deployed Node-related files to the new node

On the Master node, copy the files involved on the Worker Node to the new nodes 192.168.31.72/73:

scp -r /opt/kubernetes root@192.168.31.72:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.72:/usr/lib/systemd/system

scp -r /opt/cni/ root@192.168.31.72:/opt/

scp /opt/kubernetes/ssl/ca.pem root@192.168.31.72:/opt/kubernetes/ssl

2. Delete kubelet certificate and kubeconfig file

rm /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically after the certificate request is approved. They are unique to each Node, so they must be deleted here and regenerated on the new node.

3. Modify host name

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
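
The same change can be scripted if preferred; a minimal sketch (not part of the original article), run on the new node with NEW_NAME set per node:

NEW_NAME=k8s-node1
sed -i "s/--hostname-override=k8s-master/--hostname-override=${NEW_NAME}/" /opt/kubernetes/cfg/kubelet.conf
sed -i "s/hostnameOverride: k8s-master/hostnameOverride: ${NEW_NAME}/" /opt/kubernetes/cfg/kube-proxy-config.yml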

4. Start and set startup

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

5. Approve the new Node kubelet Certificate Application on the Master

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. View Node status

kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      <none>   65m   v1.18.3
k8s-node1    Ready      <none>   12m   v1.18.3
k8s-node2    Ready      <none>   81s   v1.18.3

Node 2 (192.168.31.73) is set up in the same way. Remember to change the host name!

6, Deploy Dashboard and CoreDNS

6.1 deploying Dashboard

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

The default Dashboard can only be accessed inside the cluster. Modify the Service to NodePort type and expose it to the outside:

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS              RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-z8gfb   1/1     Running             0          2m18s
pod/kubernetes-dashboard-9774cc786-q2gsx         1/1     Running   0          2m19s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.141   <none>        8000/TCP        2m19s
service/kubernetes-dashboard        NodePort    10.0.0.239   <none>        443:30001/TCP   2m19s

Access address: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard using the output token.

6.2 deploying CoreDNS

CoreDNS is used for Service name resolution within the cluster.

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system 
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s
kube-flannel-ds-amd64-2pc95   1/1     Running   0          38m
kube-flannel-ds-amd64-7qhdx   1/1     Running   0          15m
kube-flannel-ds-amd64-99cr8   1/1     Running   0          26m

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Name resolution works correctly.
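
As an optional extra smoke test (not in the original article), you can also create a test Service and resolve it by name from inside the cluster:

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/ # nslookup web
# The lookup should return the ClusterIP assigned to the web Service.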

So far, the deployment of the single-Master cluster has been completed. The next part expands it to a multi-Master cluster~


Tags: Kubernetes

Posted by anindya23 on Tue, 19 Apr 2022 01:29:53 +0930