Play k8s: data storage

1 Data Storage

As mentioned earlier, a container's life cycle can be very short: containers are created and destroyed frequently. When a container is destroyed, the data stored inside it is cleared as well, which is undesirable in many cases. To persist a container's data, kubernetes introduces the concept of the Volume.

A Volume is a shared directory in a Pod that can be accessed by multiple containers. It is defined at the Pod level and mounted by the containers in the Pod to specific paths. kubernetes uses Volumes to implement both data sharing between containers in the same Pod and persistent storage of data. The life cycle of a Volume is not tied to any single container in the Pod: when a container is terminated or restarted, the data in the Volume is not lost.

kubernetes supports many Volume types; the more common ones are as follows:

  • Simple storage: EmptyDir, HostPath, NFS
  • Advanced storage: PV, PVC
  • Configuration storage: ConfigMap, Secret

1.1 Basic storage

1.1.1 EmptyDir

EmptyDir is the most basic Volume type; an EmptyDir is simply an empty directory on the host.

An EmptyDir is created when the Pod is assigned to a Node. Its initial content is empty, and there is no need to specify a corresponding directory on the host, because kubernetes allocates one automatically. When the Pod is destroyed, the data in the EmptyDir is permanently deleted as well. EmptyDir is typically used for:

  • Temporary space, such as a scratch directory that an application needs while running but does not need to keep permanently
  • A directory through which one container obtains data from another container (a multi-container shared directory)

Next, let's demonstrate EmptyDir with a case of file sharing between containers.

Prepare a Pod with two containers, nginx and busybox, and declare a Volume mounted into a directory of each. The nginx container writes its logs to the Volume, and busybox streams the log contents to the console with a command.

Create a volume-emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:  # Mount logs-volume into the nginx container at /var/log/nginx
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","tail -f /logs/access.log"] # Initial command, dynamically read the contents of the specified file
    volumeMounts:  # Mount logs-volume into the busybox container at /logs
    - name: logs-volume
      mountPath: /logs
  volumes: # Declare volume, name is logs-volume, type is emptyDir
  - name: logs-volume
    emptyDir: {}

 

# Create pods
[root@k8s-master01 ~]# kubectl create -f volume-emptydir.yaml
pod/volume-emptydir created

# view pods
[root@k8s-master01 ~]# kubectl get pods volume-emptydir -n dev -o wide
NAME                  READY   STATUS    RESTARTS   AGE      IP       NODE   ...... 
volume-emptydir       2/2     Running   0          97s   10.42.2.9   node1  ......

# Access nginx through the Pod IP
[root@k8s-master01 ~]# curl 10.42.2.9
......

# View the standard output of the specified container through the kubectl logs command
[root@k8s-master01 ~]# kubectl logs -f volume-emptydir -n dev -c busybox
10.42.1.0 - - [27/Jun/2021:15:08:54 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
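Incidentally, an emptyDir can also be backed by memory (tmpfs) instead of the node's disk. A minimal sketch of such a volume declaration (the name cache-volume and the 100Mi limit are illustrative, not part of the example above):

  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory    # back the volume with tmpfs instead of node disk
      sizeLimit: 100Mi  # the Pod is evicted if usage exceeds this limit

Keep in mind that the contents of a memory-backed emptyDir count against the container's memory limits.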

1.1.2 HostPath

As mentioned above, data in an EmptyDir is not persisted; it is destroyed along with the Pod. If you want to simply persist data onto the host, you can choose HostPath.

A HostPath mounts an actual directory on the Node host into the Pod for the containers to use. This design ensures that even if the Pod is destroyed, the data can still remain on the Node host.

Create a volume-hostpath.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","tail -f /logs/access.log"]
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
  volumes:
  - name: logs-volume
    hostPath: 
      path: /root/logs
      type: DirectoryOrCreate  # If the directory exists, use it; if it does not exist, create it first and then use it

A note on the values of type:

    DirectoryOrCreate  use the directory if it exists; if not, create it first and then use it
    Directory          the directory must exist
    FileOrCreate       use the file if it exists; if not, create it first and then use it
    File               the file must exist
    Socket             the unix socket must exist
    CharDevice         the character device must exist
    BlockDevice        the block device must exist

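As an aside, the same mechanism can mount a single host file rather than a directory. Here is a hedged Pod spec fragment using type File (mounting /etc/localtime this way is a common trick to sync the container's time zone with the host; the volume name is illustrative):

    volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
      readOnly: true
  volumes:
  - name: localtime
    hostPath:
      path: /etc/localtime  # must already exist on the host, because type is File
      type: File
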
# Create pods
[root@k8s-master01 ~]# kubectl create -f volume-hostpath.yaml
pod/volume-hostpath created

# view pods
[root@k8s-master01 ~]# kubectl get pods volume-hostpath -n dev -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE   ......
volume-hostpath       2/2     Running   0          16s   10.42.2.10     node1  ......

# visit nginx
[root@k8s-master01 ~]# curl 10.42.2.10

# Next, you can go to the /root/logs directory on the host to view the stored files.
###  Note: The following operations need to be run on the node where the Pod is located (node1 in this case)
[root@node1 ~]# ls /root/logs/
access.log  error.log

# Similarly, if you create a file in this directory, it will be visible inside the container

1.1.3 NFS

HostPath solves the problem of data persistence, but once the Node fails, problems arise again if the Pod is moved to another node. In that case, a separate network storage system needs to be prepared; NFS and CIFS are commonly used.

NFS is a network file system. You can set up an NFS server and connect the storage in the Pod directly to it. This way, no matter which node the Pod is moved to, the data can be accessed successfully as long as the connection between the Node and the NFS server is working.

1) First, prepare the NFS server. For simplicity, the master node is used directly as the NFS server here.

# Install the nfs service on the nfs server
[root@nfs ~]# yum install nfs-utils -y

# Prepare a shared directory
[root@nfs ~]# mkdir /root/data/nfs -pv

# Expose the shared directory to all hosts in the 192.168.5.0/24 network segment with read and write permissions
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# more /etc/exports
/root/data/nfs     192.168.5.0/24(rw,no_root_squash)

# start nfs service
[root@nfs ~]# systemctl restart nfs
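You can optionally verify the export from any machine with nfs-utils installed; showmount should list the shared directory (192.168.5.6 is the server address used throughout this example):

# List the directories exported by the nfs server
[root@k8s-master01 ~]# showmount -e 192.168.5.6
Export list for 192.168.5.6:
/root/data/nfs 192.168.5.0/24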

2) Next, install the nfs utilities on every node so that the nodes can mount the nfs share.

# Install the nfs utilities on each node; the nfs service does not need to be started there
[root@node1 ~]# yum install nfs-utils -y
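To confirm that a node can actually reach the share, a quick manual mount test can help (optional; unmount afterwards so kubernetes can manage the mounts itself):

# Temporarily mount the share on a node, then unmount it
[root@node1 ~]# mount -t nfs 192.168.5.6:/root/data/nfs /mnt
[root@node1 ~]# umount /mnt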

3) Next, you can write the pod configuration file and create volume-nfs.yaml

apiVersion: v1
kind: Pod
metadata:
  name: volume-nfs
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","tail -f /logs/access.log"] 
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
  volumes:
  - name: logs-volume
    nfs:
      server: 192.168.5.6  # nfs server address
      path: /root/data/nfs # shared directory path

4) Finally, run the pod and observe the results

# Create pods
[root@k8s-master01 ~]# kubectl create -f volume-nfs.yaml
pod/volume-nfs created

# view pods
[root@k8s-master01 ~]# kubectl get pods volume-nfs -n dev
NAME                  READY   STATUS    RESTARTS   AGE
volume-nfs        2/2     Running   0          2m9s

# Check the shared directory on the nfs server; the log files are already there
[root@nfs ~]# ls /root/data/nfs/
access.log  error.log

1.2 Advanced Storage

We have already learned to use NFS to provide storage, which requires users to set up an NFS system and configure nfs in the yaml themselves. Since kubernetes supports many storage systems, it is unrealistic to require users to master all of them. To hide the details of the underlying storage implementation and make storage easier to consume, kubernetes introduces the PV and PVC resource objects.

PV (Persistent Volume) means persistent volume and is an abstraction over the underlying shared storage. In general, a PV is created and configured by the kubernetes administrator; it is tied to the specific underlying shared storage technology and connects to it through plug-ins.

PVC (Persistent Volume Claim) means persistent volume claim and is a declaration of the user's storage requirements. In other words, a PVC is a resource request that the user submits to the kubernetes system.

After using PV and PVC, the work can be further subdivided:

  • Storage: maintained by storage engineers
  • PV: maintained by kubernetes administrators
  • PVC: maintained by kubernetes users

1.2.1 PV

PV is an abstraction of storage resources. The following is a resource manifest template:

apiVersion: v1  
kind: PersistentVolume
metadata:
  name: pv2
spec:
  nfs: # Storage type, corresponding to the underlying real storage
  capacity:  # Storage capacity, currently only supports the setting of storage space
    storage: 2Gi
  accessModes:  # access mode
  storageClassName: # storage class
  persistentVolumeReclaimPolicy: # recycling strategy

Description of key configuration parameters of PV:

  • Storage type

The type of the actual underlying storage. kubernetes supports multiple storage types, and each has its own configuration.

  • Storage capacity (capacity)

Currently only the storage space setting (storage=1Gi) is supported, but configuration of indicators such as IOPS and throughput may be added in the future.

  • Access Modes (accessModes)

Used to describe the access rights of the user application to the storage resource. The following modes are supported:

ReadWriteOnce (RWO): read and write permissions, but can only be mounted by a single node

ReadOnlyMany (ROX): read-only permission, can be mounted by multiple nodes

ReadWriteMany (RWX): read and write permissions, can be mounted by multiple nodes

It should be noted that different underlying storage types may support different access modes

  • Reclaim Policy (persistentVolumeReclaimPolicy)

What to do with the PV when it is no longer in use. Three strategies are currently supported:

Retain (retain): keep the data; an administrator must clean it up manually

Recycle (recycle): clear the data in the PV, equivalent to executing rm -rf /thevolume/*

Delete (delete): the back-end storage behind the PV deletes the volume; this is common with cloud providers' storage services

It should be noted that different underlying storage types may support different recovery strategies

  • Storage class (storageClassName)

A PV can specify a storage class through the storageClassName parameter

A PV with a specific class can only be bound by a PVC that requests that class

A PV without a class can only be bound by a PVC that does not request any class
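For example, here is a minimal sketch of the matching rule (the class name slow and the object names are illustrative): the PVC below can bind to the PV below because both specify storageClassName: slow.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-slow
spec:
  storageClassName: slow  # this PV belongs to class "slow"
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /root/data/pv1
    server: 192.168.5.6
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-slow
  namespace: dev
spec:
  storageClassName: slow  # only PVs of class "slow" are candidates for binding
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi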

  • Status (status)

In its life cycle, a PV may be in one of 4 different phases:

Available (available): Indicates that it is available and has not been bound by any PVC

Bound (bound): Indicates that the PV has been bound by the PVC

Released: Indicates that the PVC was deleted, but the resource has not yet been reclaimed by the cluster

Failed: Indicates that the automatic reclamation of the PV failed

Experiment

Use NFS as the backing storage to demonstrate PV usage: create 3 PVs corresponding to 3 exported paths on the NFS server.

1) Prepare the NFS environment

# Create a directory
[root@nfs ~]# mkdir /root/data/{pv1,pv2,pv3} -pv

# exposed service
[root@nfs ~]# more /etc/exports
/root/data/pv1     192.168.5.0/24(rw,no_root_squash)
/root/data/pv2     192.168.5.0/24(rw,no_root_squash)
/root/data/pv3     192.168.5.0/24(rw,no_root_squash)

# restart service
[root@nfs ~]#  systemctl restart nfs

2) Create pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv1
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/pv1
    server: 192.168.5.6

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv2
spec:
  capacity: 
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/pv2
    server: 192.168.5.6
    
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv3
spec:
  capacity: 
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/pv3
    server: 192.168.5.6
# create pv
[root@k8s-master01 ~]# kubectl create -f pv.yaml
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created

# view pv
[root@k8s-master01 ~]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES  RECLAIM POLICY  STATUS      AGE   VOLUMEMODE
pv1    1Gi        RWX            Retain        Available    10s   Filesystem
pv2    2Gi        RWX            Retain        Available    10s   Filesystem
pv3    3Gi        RWX            Retain        Available    9s    Filesystem

1.2.2 PVC

PVC is an application for resources, which is used to declare demand information for storage space, access mode, and storage category. Here is the resource manifest file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: dev
spec:
  accessModes: # access mode
  selector: # use labels to select PVs
  storageClassName: # storage class
  resources: # request space
    requests:
      storage: 5Gi

Description of key PVC configuration parameters:

  • Access Modes (accessModes)

Used to describe the access rights of the user application to the storage resource

  • Selection criteria (selector)

By setting a Label Selector, the PVC can filter the existing PVs in the system

  • Storage class (storageClassName)

When defining a PVC, you can specify the required back-end storage class; only PVs of that class can be selected by the system

  • Resource request (resources)

Describes the request for the storage resource
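As a sketch of the selector parameter (the label env: test and the object names are illustrative): label a PV, then let the PVC select only PVs carrying that label.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-labeled
  labels:
    env: test  # label the PV...
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /root/data/pv1
    server: 192.168.5.6
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selected
  namespace: dev
spec:
  selector:
    matchLabels:
      env: test  # ...and let the PVC consider only PVs with this label
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi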

Experiment

1) Create pvc.yaml to claim the PVs

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
# create pvc
[root@k8s-master01 ~]# kubectl create -f pvc.yaml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created

# view pvc
[root@k8s-master01 ~]# kubectl get pvc  -n dev -o wide
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
pvc1   Bound    pv1      1Gi        RWX                           15s   Filesystem
pvc2   Bound    pv2      2Gi        RWX                           15s   Filesystem
pvc3   Bound    pv3      3Gi        RWX                           15s   Filesystem

# view pv
[root@k8s-master01 ~]# kubectl get pv -o wide
NAME  CAPACITY ACCESS MODES  RECLAIM POLICY  STATUS    CLAIM       AGE     VOLUMEMODE
pv1    1Gi        RWX        Retain          Bound    dev/pvc1    3h37m    Filesystem
pv2    2Gi        RWX        Retain          Bound    dev/pvc2    3h37m    Filesystem
pv3    3Gi        RWX        Retain          Bound    dev/pvc3    3h37m    Filesystem   

2) Create pods.yaml to use the PVCs

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","while true;do echo pod1 >> /root/out.txt; sleep 10; done;"]
    volumeMounts:
    - name: volume
      mountPath: /root/
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc1
        readOnly: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh","-c","while true;do echo pod2 >> /root/out.txt; sleep 10; done;"]
    volumeMounts:
    - name: volume
      mountPath: /root/
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc2
# Create pods
[root@k8s-master01 ~]# kubectl create -f pods.yaml
pod/pod1 created
pod/pod2 created

# view pods
[root@k8s-master01 ~]# kubectl get pods -n dev -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE   
pod1   1/1     Running   0          14s   10.244.1.69   node1   
pod2   1/1     Running   0          14s   10.244.1.70   node1  

# view pvc
[root@k8s-master01 ~]# kubectl get pvc -n dev -o wide
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES      AGE   VOLUMEMODE
pvc1   Bound    pv1      1Gi        RWX               94m   Filesystem
pvc2   Bound    pv2      2Gi        RWX               94m   Filesystem
pvc3   Bound    pv3      3Gi        RWX               94m   Filesystem

# view pv
[root@k8s-master01 ~]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM       AGE     VOLUMEMODE
pv1    1Gi        RWX            Retain           Bound    dev/pvc1    5h11m   Filesystem
pv2    2Gi        RWX            Retain           Bound    dev/pvc2    5h11m   Filesystem
pv3    3Gi        RWX            Retain           Bound    dev/pvc3    5h11m   Filesystem

# View the files stored on the nfs server
[root@nfs ~]# more /root/data/pv1/out.txt
pod1
pod1
[root@nfs ~]# more /root/data/pv2/out.txt
pod2

1.2.3 Life cycle

PVC and PV correspond one-to-one, and the interaction between PV and PVC follows the life cycle below:

Resource provisioning: the administrator manually creates the underlying storage and the PVs

Resource binding: the user creates a PVC, and kubernetes finds and binds a suitable PV according to the PVC's declaration

After the user defines a PVC, the system selects, among the existing PVs, one that satisfies the PVC's request for storage resources.

Once one is found, that PV is bound to the user-defined PVC, and the user's application can then use the PVC.

If none is found, the PVC stays in the Pending state indefinitely until the system administrator creates a PV that meets its requirements.

Once a PV is bound to a PVC, it is exclusively occupied by that PVC and cannot be bound to other PVCs.

Resource usage: the user can use the PVC in a Pod like a volume

The Pod mounts the PVC, through its Volume definition, to a path in the container.

Resource release: the user deletes the PVC to release the PV

When the storage is no longer needed, the user can delete the PVC. The PV bound to it is then marked as "released", but it cannot be bound to another PVC immediately: data written through the previous PVC may still remain on the storage device, and the PV can only be used again after it has been cleared.

Resource reclamation: kubernetes reclaims the resource according to the reclaim policy set on the PV

The administrator can set a reclaim policy on the PV, which determines how to handle the remaining data after the bound PVC releases the resource. Only after the PV's storage space has been reclaimed can it be bound and used by a new PVC.
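To see the release phase in action, you could delete the unused pvc3 from the earlier experiment and watch its PV switch to Released (a sketch; the output below is abridged and illustrative):

# pvc3 is not mounted by any pod, so it can be deleted directly
[root@k8s-master01 ~]# kubectl delete pvc pvc3 -n dev
persistentvolumeclaim "pvc3" deleted

# pv3 is now Released rather than Available: leftover data must be cleared before reuse
[root@k8s-master01 ~]# kubectl get pv pv3
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM      ......
pv3    3Gi        RWX            Retain           Released   dev/pvc3   ......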

1.3 Configuration storage

1.3.1 ConfigMap

ConfigMap is a special kind of storage volume whose main function is to store configuration information.

Create configmap.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: dev
data:
  info: |
    username:admin
    password:123456

Next, create a configmap from this file:

# create configmap
[root@k8s-master01 ~]# kubectl create -f configmap.yaml
configmap/configmap created

# View configmap details
[root@k8s-master01 ~]# kubectl describe cm configmap -n dev
Name:         configmap
Namespace:    dev
Labels:       <none>
Annotations:  <none>

Data
====
info:
----
username:admin
password:123456

Events:  <none>

Next, create a pod-configmap.yaml and mount the configmap created above

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts: # Mount configmap to directory
    - name: config
      mountPath: /configmap/config
  volumes: # reference the configmap
  - name: config
    configMap:
      name: configmap
# Create pods
[root@k8s-master01 ~]# kubectl create -f pod-configmap.yaml
pod/pod-configmap created

# view pods
[root@k8s-master01 ~]# kubectl get pod pod-configmap -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-configmap   1/1     Running   0          6s

# enter the container
[root@k8s-master01 ~]# kubectl exec -it pod-configmap -n dev /bin/sh
# cd /configmap/config/
# ls
info
# more info
username:admin
password:123456

# You can see that the mapping has been successful, and each configmap is mapped into a directory
# key ---> file name, value ---> file content
# At this time, if the content of the configmap is updated, the value in the container will also be dynamically updated
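Besides mounting it as a volume, ConfigMap data can also be injected into environment variables. A minimal container fragment as a sketch (the variable name APP_INFO is illustrative; note that unlike mounted files, environment variables are not updated dynamically when the ConfigMap changes):

  containers:
  - name: nginx
    image: nginx:1.17.1
    env:
    - name: APP_INFO         # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: configmap    # the ConfigMap created above
          key: info          # the key under its data section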

1.3.2 Secret

In kubernetes, there is another object very similar to ConfigMap, called Secret. It is mainly used to store sensitive information such as passwords, keys, and certificates.

1) First use base64 to encode the data

[root@k8s-master01 ~]# echo -n 'admin' | base64 #prepare username
YWRtaW4=
[root@k8s-master01 ~]# echo -n '123456' | base64 #prepare password
MTIzNDU2
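Alternatively, kubectl can do the base64 encoding for you; the following command should create an equivalent Secret without writing any yaml (use it instead of steps 1 and 2, not in addition, since the name would clash):

[root@k8s-master01 ~]# kubectl create secret generic secret -n dev \
    --from-literal=username=admin --from-literal=password=123456
secret/secret created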

2) Next, write secret.yaml and create a Secret

apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: dev
type: Opaque
data:
  username: YWRtaW4=
  password: MTIzNDU2
# create secret
[root@k8s-master01 ~]# kubectl create -f secret.yaml
secret/secret created

# View secret details
[root@k8s-master01 ~]# kubectl describe secret secret -n dev
Name:         secret
Namespace:    dev
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
password:  6 bytes
username:  5 bytes

3) Create pod-secret.yaml and mount the secret created above:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts: # Mount the secret to the directory
    - name: config
      mountPath: /secret/config
  volumes:
  - name: config
    secret:
      secretName: secret
# Create pods
[root@k8s-master01 ~]# kubectl create -f pod-secret.yaml
pod/pod-secret created

# view pods
[root@k8s-master01 ~]# kubectl get pod pod-secret -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-secret      1/1     Running   0          2m28s

# Enter the container, view the secret information, and find that it has been automatically decoded
[root@k8s-master01 ~]# kubectl exec -it pod-secret /bin/sh -n dev
/ # ls /secret/config/
password  username
/ # more /secret/config/username
admin
/ # more /secret/config/password
123456
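
A key can also be read back and decoded from outside the Pod, for example with jsonpath plus base64 -d:

# Extract the base64-encoded value of one key and decode it
[root@k8s-master01 ~]# kubectl get secret secret -n dev -o jsonpath='{.data.password}' | base64 -d
123456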

At this point, we have used a Secret to store sensitive information in encoded form.
