Environment overview and description
- This practice set mirrors the content of the exam, and the exam itself is a kind of experience. Before, I had only learned the material; only when real problems and requirements appear can you tell whether what you learned is useful. Honestly, these questions do not look hard when you open them, but they are not as easy to do as they seem. Let's work through them together.
console-20 is the console virtual machine. The names below (k8s, ek8s, mk8s, ...) are different clusters. All operations are performed from console-20, and only one console-20 terminal is open.
Set up kubectl tab completion
Use the student user to log in to console-20 with the password ha001
```bash
[student@vms20 ~]$ sudo -i
[root@vms20 ~]# kubectl --help | grep bash
  completion    Output shell completion code for the specified shell (bash or zsh)
[root@vms20 ~]# vi /etc/profile
[root@vms20 ~]# head -n2 /etc/profile
# /etc/profile
source <(kubectl completion bash)
[root@vms20 ~]# exit
logout
[student@vms20 ~]$ source /etc/profile
```
Exit and source again as the student user: if you only source the file as root, completion takes effect for root only.
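As an alternative that does not touch /etc/profile, the same line can go into the student user's own ~/.bashrc; a minimal sketch (either location works, and `kubectl completion bash` is the documented subcommand):

```bash
# Enable kubectl bash completion for the current user only
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
```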
- In what follows, "1.4%" means question 1, worth 4 points [total score is 100]; k8s is the name of the cluster to use.
- Context is the background for the question.
- Task is what you are asked to do.
1.4% k8s √
Set the configuration context: kubectl config use-context k8s
Context
Create a new ClusterRole for the deployment pipeline and bind it to a specific ServiceAccount, scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole that only allows the following resource types to be created:
Deployment
StatefulSet
DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token. [Although a ClusterRole is cluster-scoped, the binding is restricted to namespace app-team1, so what we actually create is a RoleBinding, not a ClusterRoleBinding.]
```bash
[student@vms20 ~]$ kubectl config use-context k8s
[student@vms20 ~]$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefullset,daemonset
the server doesn't have a resource type "statefullset"
# note the typo above: the resource type is "statefulset"
[student@vms20 ~]$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefulset,daemonset
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
[student@vms20 ~]$ kubectl create serviceaccount cicd-token -n app-team1
serviceaccount/cicd-token created
[student@vms20 ~]$ kubectl create rolebinding rbind1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
rolebinding.rbac.authorization.k8s.io/rbind1 created
[student@vms20 ~]$
```
--clusterrole=deployment-clusterrole specifies the ClusterRole to bind; --serviceaccount=app-team1:cicd-token is given as namespace:serviceaccount.
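To sanity-check the binding, kubectl auth can-i can impersonate the ServiceAccount; a quick check along these lines (exact output wording may differ):

```bash
# Should print "yes": the ServiceAccount may create Deployments in app-team1
kubectl auth can-i create deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1
# Should print "no": only the create verb was granted
kubectl auth can-i delete deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1
```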
★2.4% ek8s
Set the configuration context: kubectl config use-context ek8s
Task
Set the node named ek8s-node-0 (vms25) to unavailable and reschedule all running pods on the node.
```bash
[student@vms20 ~]$ kubectl config use-context ek8s
Switched to context "ek8s".
[student@vms20 ~]$ kubectl get nodes
NAME            STATUS     ROLES                  AGE   VERSION
vms24.rhce.cc   Ready      control-plane,master   28d   v1.22.2
vms25.rhce.cc   Ready      <none>                 28d   v1.22.2
vms26.rhce.cc   NotReady   <none>                 28d   v1.22.2
[student@vms20 ~]$ kubectl drain vms25.rhce.cc --ignore-daemonsets
node/vms25.rhce.cc cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-4x98t, kube-system/kube-proxy-hhwd8
evicting pod kube-system/coredns-7f6cbbb7b8-mg4jv
evicting pod kube-system/coredns-7f6cbbb7b8-k46zd
pod/coredns-7f6cbbb7b8-mg4jv evicted
pod/coredns-7f6cbbb7b8-k46zd evicted
node/vms25.rhce.cc evicted
[student@vms20 ~]$
```
Note: if the plain drain above fails [normally it will not], add the flag suggested in the error message, --delete-emptydir-data. If that still does not work, add --force, as sketched below.
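A sketch of the drain command with the fallback flags mentioned above (only add them if the plain drain fails; --force also evicts Pods that are not managed by a controller, so treat it as a last resort):

```bash
# Plain drain first; add the extra flags only if kubectl asks for them.
kubectl drain vms25.rhce.cc --ignore-daemonsets
kubectl drain vms25.rhce.cc --ignore-daemonsets --delete-emptydir-data --force
```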
★3.7% mk8s
Set the configuration context: kubectl config use-context mk8s
Task
The existing Kubernetes cluster is running version 1.21.1. Upgrade all of the Kubernetes control-plane and node components on the master node only to version 1.21.2.
In addition, upgrade kubelet and kubectl on the master node.
Make sure to drain the master node before upgrading and uncordon it after upgrading. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other add-ons.
Log in to the master node of the mk8s cluster
These operations must be done as root on the master node
```bash
[student@vms20 ~]$ kubectl config use-context mk8s
Switched to context "mk8s".
[student@vms20 ~]$ kubectl get ndoes
error: the server doesn't have a resource type "ndoes"
[student@vms20 ~]$ kubectl get nodes
NAME            STATUS   ROLES                  AGE   VERSION
vms28.rhce.cc   Ready    control-plane,master   28d   v1.22.1
vms29.rhce.cc   Ready    <none>                 28d   v1.22.1
[student@vms20 ~]$ ssh vms28.rhce.cc
student@vms28:~$ sudo -i
root@vms28:~#
```
Search the official documentation for "upgrade" and upgrade kubeadm
- Go to the Kubernetes official website
- Search for "upgrade" and open the "Upgrading kubeadm clusters" page
- Upgrade kubeadm
- Because the console is awkward to use during the exam [and the nodes run Ubuntu], copying commands one by one easily causes problems, so edit them in a text file first
Copy the commands from the documentation and change the version to the target version number. As below, I need to upgrade to 1.21.2, so I end up with the final single line of commands.
```bash
root@vms28:~# cat a.txt
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.21.2-00 && apt-mark hold kubeadm
root@vms28:~# # then copy that line and execute it
root@vms28:~# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.21.2-00 && apt-mark hold kubeadm
... lots of output
```
- If the upgrade above reports an error, do the following:
Run mount | grep lxcfs and umount every listed mount point that starts with /proc.
Drain the node and upgrade the control plane
- As shown below, a plain drain failed with an error, and adding the --ignore-daemonsets flag suggested in the message made it succeed
```bash
root@vms28:~# kubectl drain vms28.rhce.cc
node/vms28.rhce.cc cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "vms28.rhce.cc", aborting command...

There are pending nodes to be drained:
 vms28.rhce.cc
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-gvsks, kube-system/kube-proxy-t9tj5
root@vms28:~# kubectl drain vms28.rhce.cc --ignore-daemonsets
node/vms28.rhce.cc already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-gvsks, kube-system/kube-proxy-t9tj5
node/vms28.rhce.cc drained
root@vms28:~#
```
- Upgrade the control plane
- Because etcd must not be upgraded, add the parameter --etcd-upgrade=false
```bash
root@vms28:~# kubeadm upgrade apply 1.21.2 --etcd-upgrade=false
...
# the required images are downloaded automatically
# when prompted, type y and press Enter
```
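As a side note, kubeadm upgrade plan can be run before apply to preview which versions the control plane can be upgraded to (a standard kubeadm subcommand, not required by the task):

```bash
# Lists the current cluster version and the versions kubeadm can upgrade to
kubeadm upgrade plan
```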
Uncordon the node and upgrade kubelet and kubectl
- Uncordon the node
root@vms28:~# kubectl uncordon vms28.rhce.cc node/vms28.rhce.cc uncordoned root@vms28:~#
- Upgrade kubelet
Find the kubelet/kubectl upgrade commands on the official website, edit them in a file (any name) on the node with vim, merge them into one line, change the version number to the target version, and then copy that single line.
```bash
root@vms28:~# cat a.txt
apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.21.2-00 kubectl=1.21.2-00 && apt-mark hold kubelet kubectl
root@vms28:~# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.21.2-00 kubectl=1.21.2-00 && apt-mark hold kubelet kubectl
... lots of output: the kubelet and kubectl packages are downloaded automatically
```
- Restart kubelet service
root@vms28:~# systemctl daemon-reload ; systemctl restart kubelet root@vms28:~#
- The upgrade is now complete
If the version shown has not changed yet, wait a minute and check again
root@vms28:~# kubectl get nodes NAME STATUS ROLES AGE VERSION vms28.rhce.cc Ready control-plane,master 28d v1.22.1 vms29.rhce.cc Ready <none> 28d v1.22.1 root@vms28:~# root@vms28:~# kubectl get nodes NAME STATUS ROLES AGE VERSION vms28.rhce.cc Ready control-plane,master 28d v1.22.2 vms29.rhce.cc Ready <none> 28d v1.22.1 root@vms28:~#
4.7%
No context switch is needed for this question [it is done on the console node]
Task
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to
/srv/data/etcd-snapshot.db.
Creating a snapshot of the given instance is expected to complete in a few seconds. If the operation appears to hang, there may be a problem with the command; use Ctrl+C to cancel it and try again.
Then restore the existing previous snapshot located at /srv/data/etcd-snapshot-previous.db.
The following TLS certificate and key are provided to connect to the server through etcdctl.
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
Switch to root and determine the etcdctl version
- First run sudo -i to switch to root
- Then run etcdctl --help | more and check whether the API version starts with 3; if it does not, run export ETCDCTL_API=3
[student@vms20 ~]$ sudo -i [root@vms20 ~]# etcdctl --help| more NAME: etcdctl - A simple command line client for etcd3. USAGE: etcdctl VERSION: 3.3.11 API VERSION: 3.3
Backup
- You can run etcdctl snapshot save --help to see the options, put the command and its parameters into a text file [in case you cannot remember them], annotate everything there, and finally keep only the single line to execute
- In the end, take the following line from b.txt and execute it
```bash
[root@vms20 ~]# cat b.txt
etcdctl snapshot save /srv/data/etcd-snapshot.db --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" --cacert="/opt/KUIN00601/ca.crt" --endpoints=https://127.0.0.1:2379
[root@vms20 ~]# etcdctl snapshot save /srv/data/etcd-snapshot.db --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" --cacert="/opt/KUIN00601/ca.crt" --endpoints=https://127.0.0.1:2379
[root@vms20 ~]#
```
etcdctl snapshot save - fixed syntax for creating a snapshot; /srv/data/etcd-snapshot.db - path to save the snapshot to; --cert - client certificate; --key - client key; --cacert - CA certificate; --endpoints - address the instance is running at.
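To verify that the snapshot file was actually written, etcdctl snapshot status can inspect it (an optional check; it prints the hash, revision, total keys, and size):

```bash
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db -w table
```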
Restore
- You can run etcdctl snapshot restore --help to see the options, put the command and its parameters into a text file [in case you cannot remember them], annotate everything there, and finally keep only the single line to execute
- In addition, you first need to find the --data-dir path; you can get it from systemctl status etcd and by cat-ing the startup script it points at
- Stop etcd and delete the contents of the data directory [the --data-dir path from the startup script]
- Then prepare the restore command in a text file and execute it
```bash
[root@vms20 ~]# systemctl stop etcd
[root@vms20 ~]# rm -rf /var/lib/etcd
[root@vms20 ~]# vi c.txt
[root@vms20 ~]# cat c.txt
etcdctl snapshot restore /srv/data/etcd-snapshot-previous.db --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key --data-dir="/var/lib/etcd" --name="default" --initial-cluster="default=http://localhost:2380"
[root@vms20 ~]# etcdctl snapshot restore /srv/data/etcd-snapshot-previous.db --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key --data-dir="/var/lib/etcd" --name="default" --initial-cluster="default=http://localhost:2380"
[root@vms20 ~]#
```
/srv/data/etcd-snapshot-previous.db - the snapshot to restore from; --data-dir="/var/lib/etcd" - restore path [from the startup script]; --name="default" - member name [from the startup script]; --initial-cluster="default=http://localhost:2380" - peer URL for that member name. If localhost gives an error, replace it with 127.0.0.1; etcdctl member list (with the three certificate flags) shows the address on port 2380.
- Change the ownership of the data directory back to etcd and start etcd
[root@vms20 ~]# chown -R etcd.etcd /var/lib/etcd [root@vms20 ~]# systemctl start etcd
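As an optional sanity check after restarting etcd, etcdctl endpoint health can confirm the instance is serving again, using the same exam-provided certificates (not required by the task):

```bash
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key
```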
5.7% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace my-app to connect to port 9200 of other Pods in the same namespace.
Ensure that the new NetworkPolicy:
does not allow access to Pods that are not listening on port 9200
does not allow access from Pods that are not in namespace my-app
Remember: when doing this question, first decide whether the rule is an ingress rule or an egress rule.
Copy the YAML from the official documentation, switch to the k8s cluster, and paste it into a file
- Search the official documentation for "networkpolicy", open the NetworkPolicy resource page, and copy the example manifest
- After switching to the cluster, create a YAML file with any name and paste the copied manifest into it
[student@vms20 ~]$ kubectl config use-context k8s Switched to context "k8s". [student@vms20 ~]$
Modify the YAML and create the NetworkPolicy [ingress rule]
- From the requirements we know this is an ingress rule, it stays within the same namespace, and no labels need to be selected, so the label-selector values and the egress parts can be deleted.
Then change the name and namespace to the ones in the task, ending up with the YAML below; apply it, and if the apply fails because the namespace does not exist, create the namespace first.
```bash
[student@vms20 ~]$ cat 5.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector:
    matchLabels:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
    ports:
    - protocol: TCP
      port: 9200
[student@vms20 ~]$ kubectl apply -f 5.yaml
Error from server (NotFound): error when creating "5.yaml": namespaces "my-app" not found
[student@vms20 ~]$ kubectl create ns my-app
namespace/my-app created
[student@vms20 ~]$ kubectl apply -f 5.yaml
networkpolicy.networking.k8s.io/allow-port-from-namespace created
[student@vms20 ~]$
```
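To double-check what the API server actually stored, kubectl describe prints the effective rules (an optional check):

```bash
# The output should show PolicyTypes: Ingress and an allow rule on TCP port 9200
kubectl describe networkpolicy allow-port-from-namespace -n my-app
```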
Updated exam question [egress rule]
Set the configuration context: kubectl config use-context k8s

Task

Create a NetworkPolicy named allow-port-from-namespace in the namespace internal. Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 9200 in namespace big-corp.

Ensure that the new NetworkPolicy:
- does not allow access to Pods that are not listening on port 9200
- does not allow access from Pods that are not in namespace internal

Answer
- Here you should use egress, not ingress: connecting from namespace internal to namespace big-corp is clearly outbound traffic. Obtaining the starting file is the same as above (copied from the official documentation), so I will not repeat it. Because a second namespace is involved, you need to add a namespaceSelector: and specify the label [the selector appears in the ingress example and can be copied into the egress section].
- Then, because we need to specify the label, look up the labels on the big-corp namespace [if the namespace does not exist, create it].
- The whole process is as follows:
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ kubectl get ns --show-labels | grep big-corp
[student@vms20 ~]$ kubectl create ns big-corp
namespace/big-corp created
[student@vms20 ~]$ kubectl get ns --show-labels | grep big-corp
big-corp   Active   2s   kubernetes.io/metadata.name=big-corp
[student@vms20 ~]$
# The final manifest is below; change the name and namespace.
# Note: the "=" in the copied label must be changed to ":" followed by a space.
[student@vms20 ~]$ vi 5-2.yaml
[student@vms20 ~]$ cat 5-2.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector:
    matchLabels:
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: big-corp
    ports:
    - protocol: TCP
      port: 9200
[student@vms20 ~]$ kubectl apply -f 5-2.yaml
Error from server (NotFound): error when creating "5-2.yaml": namespaces "internal" not found
[student@vms20 ~]$ kubectl create ns internal
namespace/internal created
[student@vms20 ~]$ kubectl apply -f 5-2.yaml
networkpolicy.networking.k8s.io/allow-port-from-namespace created
[student@vms20 ~]$
```
6.7% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Reconfigure the existing deployment front-end and add a port specification named http to expose port 80/tcp of the existing container nginx.
Create a new service named front-end-svc that exposes the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes they are scheduled on.
- We can use kubectl edit to modify the deployment
- Because the task asks for a port specification named http, add a port named http, then set containerPort 80 and protocol TCP
- If you cannot remember the containerPort syntax, search the official documentation for deployment and check the example manifest
- If you cannot remember the protocol field, search the official documentation for networkpolicy [question 5 above] and check that manifest
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
cpupod      3/3     3            3           28d
front-end   1/1     1            1           28d
webserver   1/1     1            1           28d
[student@vms20 ~]$ kubectl edit deployments front-end
# Find the nginx image (around line 42), add lines 43-46 below, then save and exit with :wq
 41         name: nginx
 42         resources: {}
 43         ports:
 44         - name: http
 45           containerPort: 80
 46           protocol: TCP
 47         terminationMessagePath: /dev/termination-log
"/tmp/kubectl-edit-943303888.yaml" 72L, 2433C written
deployment.apps/front-end edited
[student@vms20 ~]$
```
- Creating a service that exposes port 80 is really just creating an svc, which kubectl expose can do directly
```bash
[student@vms20 ~]$ kubectl expose --name=front-end-svc deployment front-end --port=80 --type=NodePort
service/front-end-svc exposed
[student@vms20 ~]$
```
--name=front-end-svc - the svc name [from the task]; deployment front-end - the deployment whose Pods are exposed; --port=80 - the port; --type=NodePort - the service type [from the task].
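A quick way to confirm the service was created with the expected type and target port (optional):

```bash
kubectl get svc front-end-svc
kubectl describe svc front-end-svc | grep -Ei 'type|port'
```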
7.7% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Create a new nginx ingress resource as follows:
Name: pong
namespace: ing-internal
Expose the service hello on the path /hello, using service port 5678
You can check the availability of the service hello with the following command, which should return hello:
curl -kL <INTERNAL_IP>/hello/
Copy the YAML from the official documentation and switch to the k8s cluster
- Since the task asks for an Ingress resource, search the official website for "ingress", find the example Ingress manifest, and copy it
- Switch to the k8s cluster and paste the manifest into a YAML file with any name
[student@vms20 ~]$ kubectl config use-context k8s Switched to context "k8s".
Edit the YAML file, create the Ingress, and test it
- Add a namespace line under name. After setting the name and namespace, adjust the path and pathType, and the backend service name and port.
The final result:
```bash
[student@vms20 ~]$ vi 7.yaml
[student@vms20 ~]$ cat 7.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
[student@vms20 ~]$ kubectl apply -f 7.yaml
ingress.networking.k8s.io/pong created
[student@vms20 ~]$ kubectl get ingress -n ing-internal
NAME   CLASS    HOSTS   ADDRESS   PORTS   AGE
pong   <none>   *                 80      24s
```
- Test
Before testing, check the IP address of the ingress with the command below; after a short wait an IP appears under ADDRESS.
Then run curl -kL <INTERNAL_IP>/hello/; if you see the content, the question is done.
[student@vms20 ~]$ kubectl get ingress -n ing-internal NAME CLASS HOSTS ADDRESS PORTS AGE pong <none> * 192.168.26.23 80 66s [student@vms20 ~]$ curl -KL 192.168.26.23/hello/ hello [student@vms20 ~]$
8.4% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Scale the deployment webserver to 6 pods.
- Use the command kubectl scale deployment webserver --replicas=6
[student@vms20 ~]$ kubectl config use-context k8s Switched to context "k8s". [student@vms20 ~]$ [student@vms20 ~]$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE cpupod 3/3 3 3 28d front-end 1/1 1 1 28d webserver 1/1 1 1 28d [student@vms20 ~]$ kubectl scale deployment webserver --replicas=6 deployment.apps/webserver scaled [student@vms20 ~]$ [student@vms20 ~]$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE cpupod 3/3 3 3 28d front-end 1/1 1 1 28d webserver 1/6 6 1 28d [student@vms20 ~]$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE cpupod 3/3 3 3 28d front-end 1/1 1 1 28d webserver 3/6 6 3 28d [student@vms20 ~]$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE cpupod 3/3 3 3 28d front-end 1/1 1 1 28d webserver 6/6 6 6 28d [student@vms20 ~]$
9.4% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Schedule a pod as follows:
Name: nginx-kusc00401
image: nginx
Node selector: disk=ssd
- Use the command to generate it
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 9.yaml
After generating it, edit the file, add a nodeSelector under spec, then create the pod, and the question is done.
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 9.yaml
[student@vms20 ~]$ vi 9.yaml
[student@vms20 ~]$ cat 9.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  nodeSelector:
    disk: ssd
  containers:
  - image: nginx
    name: nginx-kusc00401
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[student@vms20 ~]$ kubectl apply -f 9.yaml
pod/nginx-kusc00401 created
[student@vms20 ~]$ kubectl get pods | grep kusc004
nginx-kusc00401   1/1     Running   0          5s
[student@vms20 ~]$
```
- In the practice environment, add imagePullPolicy: IfNotPresent, otherwise pulling the image is too slow
10.4% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Check how many worker nodes are Ready (excluding nodes with the taint NoSchedule) and write the count to
/opt/KUSC00402/kusc00402.txt
- This question is really about node status: count the nodes that are Ready and carry no NoSchedule taint
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ kubectl get nodes
NAME            STATUS   ROLES                  AGE   VERSION
vms21.rhce.cc   Ready    control-plane,master   29d   v1.22.2
vms22.rhce.cc   Ready    <none>                 29d   v1.22.2
vms23.rhce.cc   Ready    <none>                 29d   v1.22.2
[student@vms20 ~]$ # the answer is already visible above; the following is just verification
[student@vms20 ~]$ kubectl describe node vms21.rhce.cc | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[student@vms20 ~]$ kubectl describe node vms22.rhce.cc | grep Taint
Taints:             <none>
[student@vms20 ~]$ kubectl describe node vms23.rhce.cc | grep Taint
Taints:             <none>
[student@vms20 ~]$ # write the count to the path given in the task
[student@vms20 ~]$ echo 2 > /opt/KUSC00402/kusc00402.txt
-bash: /opt/KUSC00402/kusc00402.txt: Permission denied
[student@vms20 ~]$
```
- If you get a permission-denied error, handle it as follows:
switch to root and run chmod -R o+w on the directory to grant write permission
[student@vms20 ~]$ sudo -i [root@vms20 ~]# chmod -R o+w /opt/KUSC00402/ [root@vms20 ~]# exit Logout [student@vms20 ~]$ echo 2 > /opt/KUSC00402/kusc00402.txt [student@vms20 ~]$ [student@vms20 ~]$ cat /opt/KUSC00402/kusc00402.txt 2 [student@vms20 ~]$
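If you prefer to cross-check the count with a command instead of counting by eye, a rough sketch like the following works on a small cluster (not required by the task; it lists nodes that are Ready and have no NoSchedule taint, then counts them):

```bash
kubectl get nodes --no-headers | grep -w Ready | awk '{print $1}' | while read n; do
  # keep only nodes whose description contains no NoSchedule taint
  kubectl describe node "$n" | grep -q NoSchedule || echo "$n"
done | wc -l
```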
11.4% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Create a pod named kucc4 that runs one app container for each of the following images
(there may be 1 to 4 images):
nginx+redis+memcached+consul
- After entering the k8s cluster, we can create a yaml file
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ kubectl run kucc4 --image=nginx --dry-run=client -o yaml > 11.yaml
[student@vms20 ~]$ cat 11.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: kucc4
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[student@vms20 ~]$
```
- Then edit the YAML file and add the remaining images [add as many as the task lists]; each image/name/resources block is one container
Because container names are not specified in the task, we can pick our own names, ending up with the YAML below
```bash
[student@vms20 ~]$ cat 11.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: c1
    resources: {}
  - image: redis
    name: c2
    resources: {}
  - image: memcached
    name: c3
    resources: {}
  - image: consul
    name: c4
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[student@vms20 ~]$
```
- Then create the pod; when all containers are Running, the question is done
[student@vms20 ~]$ kubectl apply -f 11.yaml pod/kucc4 created [student@vms20 ~]$ kubectl get pods | grep kucc4 kucc4 4/4 Running 0 10s [student@vms20 ~]$
- The test environment needs to add: imagePullPolicy: IfNotPresent
★12.4% hk8s k8s√
Set the configuration context: kubectl config use-context hk8s [in the practice environment: kubectl config use-context k8s]
Task
Create a PersistentVolume named app-data with a capacity of 1Gi and access mode ReadWriteMany. The volume type is hostPath, located at /srv/app-data.
Copy the YAML from the official documentation and switch to the k8s cluster
- Search the official website for "persistent", scroll down, find the PersistentVolume example, and copy the manifest
- Switch to the k8s cluster and paste the manifest into a YAML file with any name
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ vi 12.yaml
[student@vms20 ~]$ cat 12.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
[student@vms20 ~]$
```
Edit the YAML file and create the PV
- Following the task requirements, the YAML below is obtained:
1. Change the name
2. Change the storage size
3. Change the accessModes entry
4. Change the nfs block to hostPath and set the path
5. Delete the items that are not needed
```bash
[student@vms20 ~]$ cat 12.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /srv/app-data
[student@vms20 ~]$ kubectl apply -f 12.yaml
persistentvolume/app-data created
[student@vms20 ~]$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
app-data   1Gi        RWX            Recycle          Available                                   16s
[student@vms20 ~]$
```
★13.7% ok8s k8s√
Set the configuration context: kubectl config use-context ok8s [in the practice environment: kubectl config use-context k8s]
Task
Create a new PersistentVolumeClaim:
Name: pvvolume
class: csi-hostpath-sc
capacity: 10Mi
Create a new Pod that mounts the PersistentVolumeClaim as a volume:
Name: web-server
image: nginx
Mount path: /usr/share/nginx/html
Configure a new pod to have ReadWriteOnce permission on volume.
Finally, use kubectl edit or kubectl patch to expand the PersistentVolumeClaim capacity to 70Mi, and record this change.
Copy the YAML from the official documentation and switch to the k8s cluster
- Search the official website for "persistent" [the same page as question 12], scroll down, find the PersistentVolumeClaim example, and copy the manifest
- Switch to the k8s cluster and paste the manifest into a YAML file with any name
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ vi 13.yaml
[student@vms20 ~]$ cat 13.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
[student@vms20 ~]$
```
Edit the YAML file and create the PVC
- Following the task requirements, the YAML below is obtained:
- 1. Change the name
- 2. Change accessModes [the Pod created below requires ReadWriteOnce]
- 3. Change the size
- 4. Set the storageClassName
- 5. Delete the items that are not needed
```bash
[student@vms20 ~]$ vi 13.yaml
[student@vms20 ~]$ cat 13.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvvolume
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
[student@vms20 ~]$ kubectl apply -f 13.yaml
persistentvolumeclaim/pvvolume created
[student@vms20 ~]$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvvolume   Bound    pvc-c3ca5b13-206e-43cc-8982-983f4cc8fbe7   10Mi       RWO            csi-hostpath-sc   111s
[student@vms20 ~]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS      REASON   AGE
app-data                                   1Gi        RWX            Recycle          Available                                                 8m25s
pvc-c3ca5b13-206e-43cc-8982-983f4cc8fbe7   10Mi       RWO            Delete           Bound       default/pvvolume   csi-hostpath-sc            5s
[student@vms20 ~]$
```
Create the Pod and check it
- Keep scrolling down the same documentation page and copy the "Claims As Volumes" YAML (using the claim as a volume)
- vi a YAML file with any name and edit it into the file below
This one is more straightforward: nothing needs to be deleted, only a few values change.
- 1. Change the three name fields (the Pod name and the two volume name fields) to web-server
- 2. Change the mountPath
- 3. Change the claimName to the PersistentVolumeClaim created above
```bash
[student@vms20 ~]$ cat 13-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: web-server
  volumes:
    - name: web-server
      persistentVolumeClaim:
        claimName: pvvolume
[student@vms20 ~]$ kubectl apply -f 13-2.yaml
pod/web-server created
[student@vms20 ~]$ kubectl get pods | grep web-server
web-server   1/1     Running   0          71s
[student@vms20 ~]$
```
- The test environment needs to add: imagePullPolicy: IfNotPresent
Use edit to modify the volume size
- The task says to record the change, so --record must be added
- In the editor there are two 10Mi values near the bottom; change them to 70Mi, then save and exit. An equivalent kubectl patch is sketched after the transcript below.
```bash
[student@vms20 ~]$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvvolume   Bound    pvc-c3ca5b13-206e-43cc-8982-983f4cc8fbe7   10Mi       RWO            csi-hostpath-sc   34m
[student@vms20 ~]$ kubectl edit pvc pvvolume --record
[student@vms20 ~]$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvvolume   Bound    pvc-c3ca5b13-206e-43cc-8982-983f4cc8fbe7   70Mi       RWO            csi-hostpath-sc   37m
```
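Since the task allows either kubectl edit or kubectl patch, an equivalent patch would look roughly like this (a sketch; the strategic-merge patch updates spec.resources.requests.storage, and --record behaves the same as with edit):

```bash
kubectl patch pvc pvvolume --record \
  -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
```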
14.5% k8s √
Set the configuration context: kubectl config use-context k8s
Task
Monitor the log of pod foo and:
extract the log lines containing the error unable-to-access-website
write those log lines to /opt/KUTR00101/foo
- This one is simple: grep the pod's logs for the error and redirect the matching lines to the target file
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[root@vms20 ~]# kubectl logs foo | grep unable-to-access-website > /opt/KUTR00101/foo
[root@vms20 ~]#
[student@vms20 ~]$ cat /opt/KUTR00101/foo
unable-to-access-website
unable-to-access-website
unable-to-access-website
unable-to-access-website
unable-to-access-website
[student@vms20 ~]$
```
- If you get a permission-denied error, handle it as follows:
switch to root and run chmod -R o+w on the directory to grant write permission
15.7% k8s
Set the configuration context: kubectl config use-context k8s
Context
Without changing its existing containers, you need to integrate an existing Pod into Kubernetes' built-in logging architecture (for example kubectl logs). Adding a streaming sidecar container is a good way to achieve this.
Task
Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container must run the following command:
/bin/sh -c tail -n+1 -f /var/log/legacy-app.log
Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.
Do not change the existing containers. Do not modify the path of the log file; both containers must access the file at /var/log/legacy-app.log.
Prepare the YAML file
The Pod cannot be modified in place because the task says its containers must not be changed, so export the Pod definition to a YAML file [preferably two copies, one as a backup], delete the Pod, then edit the YAML and recreate the Pod later.
[student@vms20 ~]$ kubectl get pods legacy-app -o yaml > 15.yaml [student@vms20 ~]$ kubectl get pods legacy-app -o yaml > 15.yaml.bak [student@vms20 ~]$ [student@vms20 ~]$ kubectl delete pod legacy-app pod "legacy-app" deleted [student@vms20 ~]$
Define the volume and add the new sidecar container
- vi the configuration file exported above.
- Find volumes and add what the task requires: a volume mount named logs so that the file /var/log/legacy-app.log is available to the sidecar container. You can search the official website for "sidecar"; the example there shows how to define the volume and its mount, where name is the volume name.
- Add a busybox sidecar container to the existing Pod legacy-app; the new sidecar container must run the command:
  /bin/sh -c tail -n+1 -f /var/log/legacy-app.log
- The following three pieces were added [the line numbers below refer to the edited file]; a minimal sketch of the resulting Pod spec follows the snippet.
```bash
# 1. Define a log volume (added lines 52-53)
 51   volumes:
 52   - name: logs          # name required by the task
 53     emptyDir: {}        # default volume type [the task does not specify one]
 54   - name: kube-api-access-l8xjt
 55     projected:

# 2. Mount it at /var/log/ in the existing container (added lines 28-29)
 27     volumeMounts:
 28     - name: logs
 29       mountPath: /var/log/
 30     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount

# 3. Add the sidecar container required by the task (added lines 33-38; "- name" must align with the existing "- env" entries)
 32       readOnly: true
 33   - name: busybox       # the task does not specify a container name, so pick one
 34     image: busybox
 35     command: ["sh","-c","tail -n+1 -f /var/log/legacy-app.log"]   # command to run
 36     volumeMounts:       # mount the volume
 37     - name: logs
 38       mountPath: /var/log/
 39   dnsPolicy: ClusterFirst
```
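For orientation, a minimal sketch of how the relevant parts of the edited 15.yaml end up looking is shown below. The original container's name and image are placeholders here (they come from the exported Pod and will differ in your environment); only the logs volume, the /var/log/ mounts, and the busybox sidecar with its command are what the task requires.

```yaml
# Minimal sketch; keep all other exported fields of the original Pod unchanged.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: legacy-app            # existing container: keep everything kubectl exported
    image: some-legacy-image    # placeholder, keep the exported value
    volumeMounts:
    - name: logs                # added: makes /var/log/legacy-app.log visible
      mountPath: /var/log/
  - name: busybox               # added sidecar container
    image: busybox
    command: ["sh", "-c", "tail -n+1 -f /var/log/legacy-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/
  volumes:
  - name: logs                  # volume named as the task requires
    emptyDir: {}
```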
Create a pod and view it
- Normally the legacy-app Pod now shows 2/2 containers in Running status
[student@vms20 ~]$ kubectl get pods | grep app [student@vms20 ~]$ [student@vms20 ~]$ kubectl apply -f 15.yaml pod/legacy-app created [student@vms20 ~]$ [student@vms20 ~]$ kubectl get pods | grep app legacy-app 0/2 ContainerCreating 0 3s [student@vms20 ~]$ kubectl get pods | grep app legacy-app 2/2 Running 0 7s [student@vms20 ~]$
16.5% k8s√
Set the configuration context: kubectl config use-context k8s
Task
Using the Pod label name=cpu-user, find the Pods consuming a lot of CPU at runtime, and write the name of the Pod with the highest CPU consumption to the file /opt/KUTR000401/KUTR00401.txt (which already exists).
- This is relatively simple: add the -l parameter with the label to the kubectl top command
```bash
[student@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[student@vms20 ~]$ kubectl top pods -l name=cpu-user
NAME                      CPU(cores)   MEMORY(bytes)
cpupod-85cbdc7c87-bp5c7   9m           8Mi
cpupod-85cbdc7c87-gvl6l   4m           3Mi
cpupod-85cbdc7c87-l8w4s   1m           3Mi
[student@vms20 ~]$ echo cpupod-85cbdc7c87-bp5c7 > /opt/KUTR000401/KUTR00401.txt
[student@vms20 ~]$ cat /opt/KUTR000401/KUTR00401.txt
cpupod-85cbdc7c87-bp5c7
[student@vms20 ~]$
```
★17.13% wk8s ek8s
Set the configuration context: kubectl config use-context wk8s [in the practice environment: kubectl config use-context ek8s]
Task
The Kubernetes worker node named wk8s-node-0 (vms26.rhce.cc in the practice environment) is in NotReady state. Investigate the cause and take the appropriate measures to restore the node to Ready state, ensuring that any change is made permanent.
You can ssh to a failed node using the following command:
ssh wk8s-node-0 (vms26.rhce.cc)
You can use the following command to obtain higher permissions on this node:
sudo -i
- This question is a giveaway. There is essentially only one cause: the kubelet service is not running. So go to the wk8s-node-0 node, switch to root, start the kubelet service, and enable it at boot [so the change takes effect permanently].
```bash
[student@vms20 ~]$ kubectl config use-context ek8s
Switched to context "ek8s".
[student@vms20 ~]$ kubectl get nodes
NAME            STATUS                     ROLES                  AGE   VERSION
vms24.rhce.cc   Ready                      control-plane,master   32d   v1.22.2
vms25.rhce.cc   Ready,SchedulingDisabled   <none>                 32d   v1.22.2
vms26.rhce.cc   NotReady                   <none>                 32d   v1.22.2
[student@vms20 ~]$ ssh vms26.rhce.cc
Warning: Permanently added 'vms26.rhce.cc' (ECDSA) to the list of known hosts.
student@vms26:~$ sudo -i
root@vms26:~# systemctl is-active kubelet
inactive
root@vms26:~# systemctl start kubelet
root@vms26:~# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
root@vms26:~# systemctl is-active kubelet
active
root@vms26:~#
```
- Now go back to the console, check the node status again, and you can see that the node is Ready
root@vms26:~# exit logout student@vms26:~$ exit logout [student@vms20 ~]$ kubectl get nodes NAME STATUS ROLES AGE VERSION vms24.rhce.cc Ready control-plane,master 32d v1.22.2 vms25.rhce.cc Ready,SchedulingDisabled <none> 32d v1.22.2 vms26.rhce.cc Ready <none> 32d v1.22.2 [student@vms20 ~]$
- If, during the exam, the kubelet service is already running but the node is still NotReady, do the following:
run mount | grep lxcfs and umount every listed mount point that starts with /proc
Summary
- The certificate is in hand. As exams go, the CKA by itself is not particularly difficult;
- However, learning the CKA material still took a lot of effort [after all, CKA is a subject whose training fees cost more than the exam itself. I spent 4 months studying and taking notes on all of the CKA content. Learning it is fairly tedious and tiring, but what you learn is yours, isn't it? Being tired doesn't matter.]