1. PV/PVC Redis
Redis data persistence
Redis offers two persistence mechanisms: RDB and AOF.
RDB persistence generates point-in-time snapshots of the dataset at configured intervals.
AOF persistence logs every write command executed by the server and restores the dataset by replaying those commands at startup. Commands in the AOF file are stored in the Redis protocol format, and new commands are appended to the end of the file. Redis can also rewrite the AOF file in the background so that it never grows beyond the size actually needed to represent the current dataset. RDB and AOF can be enabled together; in that case, Redis uses the AOF file to restore the dataset on restart, because it is usually more complete than the RDB file. Persistence can also be disabled entirely, in which case data exists only while the server is running.
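Persistence behavior can also be inspected and changed at runtime with redis-cli. A minimal sketch (assuming a local instance using the password configured later in this lab):

# Show the RDB snapshot schedule ("save <seconds> <changes>" pairs)
redis-cli -a 123456 CONFIG GET save
# Turn AOF on without a restart (equivalent to "appendonly yes" in redis.conf)
redis-cli -a 123456 CONFIG SET appendonly yes
# Force a background snapshot / AOF rewrite manually
redis-cli -a 123456 BGSAVE
redis-cli -a 123456 BGREWRITEAOF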
2. Building the Redis image
Redis Dockerfile
#Redis Image
FROM harbor.intra.com/baseimages/centos-base:7.9.2009

ADD redis-4.0.14.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis \
    && cd /usr/local/redis \
    && make \
    && cp src/redis-cli /usr/sbin/ \
    && cp src/redis-server /usr/sbin/ \
    && mkdir -pv /data/redis-data
ADD redis.conf /usr/local/redis/redis.conf
ADD run_redis.sh /usr/local/redis/run_redis.sh

EXPOSE 6379
CMD ["/usr/local/redis/run_redis.sh"]
build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.intra.com/wework/redis:${TAG} .
sleep 3
docker push harbor.intra.com/wework/redis:${TAG}
redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
run_redis.sh
#!/bin/bash
# redis.conf sets "daemonize yes", so redis-server forks to the background;
# tail then keeps PID 1 alive so the container does not exit.
/usr/sbin/redis-server /usr/local/redis/redis.conf
tail -f /etc/hosts
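A common alternative (a sketch, not the entrypoint used in this image) is to run Redis in the foreground and exec it, so redis-server becomes PID 1 and receives container signals directly:

#!/bin/bash
# Hypothetical alternative entrypoint: override daemonize on the command line
# and exec so redis-server becomes PID 1 and gets SIGTERM from "docker stop".
exec /usr/sbin/redis-server /usr/local/redis/redis.conf --daemonize no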
Build Redis image
root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/redis# ./build-command.sh v4.0.14
Successfully built f13c1ccdf5d6
Successfully tagged harbor.intra.com/wework/redis:v4.0.14
The push refers to repository [harbor.intra.com/wework/redis]
e045520d5142: Pushed
f7c5723d3227: Pushed
bf4069c34244: Pushed
d383bf570da4: Pushed
6f2f514dbcfd: Pushed
42a5df432d46: Pushed
7a6c7dc8d8df: Pushed
c91e83206e44: Pushed
bf0b39b2f6ed: Pushed
174f56854903: Mounted from wework/tomcat-app1
v4.0.14: digest: sha256:22882e70d65d693933f5cb61b2b449a4cef62ee65c28530030ff94b06a7eee1b size: 2416
root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/redis# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED          SIZE
harbor.intra.com/wework/redis   v4.0.14   f13c1ccdf5d6   12 minutes ago   3.28GB
Test whether the image works
root@k8s-master-01:/opt/k8s-data/dockerfile/web/wework/redis# docker run -it --rm harbor.intra.com/wework/redis:v4.0.14
7:C 10 Aug 12:48:10.607 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7:C 10 Aug 12:48:10.609 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=7, just started
7:C 10 Aug 12:48:10.609 # Configuration loaded
127.0.0.1 localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      a9b6b1d26209
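The container stays in the foreground tailing /etc/hosts, so the server itself can be checked from a second shell; a quick sketch (a9b6b1d26209 is the container ID from the output above):

# Ping the Redis server inside the running container
docker exec -it a9b6b1d26209 redis-cli -a 123456 ping
# A healthy instance replies: PONG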
3. Single-instance Redis YAML
On the NFS server, create the data directory:
mkdir -p /data/k8s/wework/redis-datadir-1
PV YAML
redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
  namespace: wework   # note: PVs are cluster-scoped, so this field has no effect
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8s/wework/redis-datadir-1
    server: 192.168.31.109
PVC YAML
redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: wework
spec:
  volumeName: redis-datadir-pv-1   # bind statically to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Create pv
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis/pv# kubectl apply -f redis-persistentvolume.yaml
persistentvolume/redis-datadir-pv-1 created
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
redis-datadir-pv-1       10Gi       RWO            Retain           Available                                                            6s
test                     1Gi        RWX            Retain           Available                                    nfs                     57d
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-1                           15h
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-2                           15h
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-3                           15h
Create pvc
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis/pv# kubectl apply -f redis-persistentvolumeclaim.yaml
persistentvolumeclaim/redis-datadir-pvc-1 created
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
redis-datadir-pv-1       10Gi       RWO            Retain           Bound       wework/redis-datadir-pvc-1                               2m15s
test                     1Gi        RWX            Retain           Available                                    nfs                     57d
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-1                           15h
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-2                           15h
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-3                           15h
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pvc -n wework
NAME                      STATUS    VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-datadir-pvc-1       Pending   redis-datadir-pv-1       0                                        10s
zookeeper-datadir-pvc-1   Bound     zookeeper-datadir-pv-1   20Gi       RWO                           15h
zookeeper-datadir-pvc-2   Bound     zookeeper-datadir-pv-2   20Gi       RWO                           15h
zookeeper-datadir-pvc-3   Bound     zookeeper-datadir-pv-3   20Gi       RWO                           15h
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pvc -n wework
NAME                      STATUS    VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-datadir-pvc-1       Bound     redis-datadir-pv-1       10Gi       RWO                           14s
zookeeper-datadir-pvc-1   Bound     zookeeper-datadir-pv-1   20Gi       RWO                           15h
zookeeper-datadir-pvc-2   Bound     zookeeper-datadir-pv-2   20Gi       RWO                           15h
zookeeper-datadir-pvc-3   Bound     zookeeper-datadir-pv-3   20Gi       RWO                           15h
Redis Deployment YAML
redis.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: wework
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.intra.com/wework/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/data/redis-data/"
              name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: wework
spec:
  type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
Create Redis Deployment
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis# kubectl apply -f redis.yaml
deployment.apps/deploy-devops-redis created
service/srv-devops-redis created
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis# kubectl get pods -n wework
NAME                                             READY   STATUS    RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-v9tq8             1/1     Running   0          8m34s
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running   0          4h19m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running   0          4h19m
zookeeper1-699d46468c-8jq4x                      1/1     Running   0          167m
zookeeper2-7cc484778-gj45x                       1/1     Running   0          167m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running   0          167m
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis# kubectl get svc -n wework
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
srv-devops-redis             NodePort    10.200.67.224    <none>        6379:36379/TCP                                 9m7s
wework-nginx-service         NodePort    10.200.89.252    <none>        80:30090/TCP,443:30091/TCP                     47h
wework-tomcat-app1-service   ClusterIP   10.200.21.158    <none>        80/TCP                                         28h
zookeeper                    ClusterIP   10.200.117.19    <none>        2181/TCP                                       167m
zookeeper1                   NodePort    10.200.167.230   <none>        2181:32181/TCP,2888:31774/TCP,3888:56670/TCP   167m
zookeeper2                   NodePort    10.200.36.129    <none>        2181:32182/TCP,2888:46321/TCP,3888:30984/TCP   167m
zookeeper3                   NodePort    10.200.190.129   <none>        2181:32183/TCP,2888:61447/TCP,3888:51393/TCP   167m
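The Service exposes Redis on NodePort 36379, so it should be reachable through any node IP. A quick connectivity sketch (192.168.31.113 is the node IP the Python test below also uses; this assumes redis-cli is installed on the client machine):

# Authenticate and ping through the NodePort
redis-cli -h 192.168.31.113 -p 36379 -a 123456 ping
# A healthy instance replies: PONG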
Write data into Redis, then delete the Pod to test whether the data is lost
[root@deploy-devops-redis-7864f5d7dc-v9tq8 /]# redis-cli
127.0.0.1:6379> AUTH 123456
OK
127.0.0.1:6379> set key1 value1
OK
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379>
Confirm that the data file was generated on the NFS server
root@haproxy-1:~# ll /data/k8s/wework/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Aug 10 13:14 ./
drwxr-xr-x 7 root root 4096 Aug 10 12:54 ../
-rw-r--r-- 1 root root  111 Aug 10 13:14 dump.rdb
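If the Redis utilities happen to be installed on the NFS host, the snapshot can also be sanity-checked in place. A sketch (redis-check-rdb ships with the Redis binaries; its availability on haproxy-1 is an assumption):

# Read-only structural check of the RDB snapshot
redis-check-rdb /data/k8s/wework/redis-datadir-1/dump.rdb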
Test deleting Pod
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis# kubectl get pods -n wework
NAME                                             READY   STATUS    RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-v9tq8             1/1     Running   0          15m
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running   0          4h26m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running   0          4h26m
zookeeper1-699d46468c-8jq4x                      1/1     Running   0          173m
zookeeper2-7cc484778-gj45x                       1/1     Running   0          173m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running   0          173m
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis# kubectl delete pods deploy-devops-redis-7864f5d7dc-v9tq8 -n wework
pod "deploy-devops-redis-7864f5d7dc-v9tq8" deleted
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis# kubectl get pods -n wework
NAME                                             READY   STATUS    RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-lgx48             1/1     Running   0          35s
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running   0          4h27m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running   0          4h27m
zookeeper1-699d46468c-8jq4x                      1/1     Running   0          174m
zookeeper2-7cc484778-gj45x                       1/1     Running   0          174m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running   0          174m
Confirm that the data is still there
[root@deploy-devops-redis-7864f5d7dc-lgx48 /]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379> get key1
"value1"
127.0.0.1:6379>
Batch-write data to Redis with Python
import redis

# Connect through the NodePort exposed by srv-devops-redis
pool = redis.ConnectionPool(host="192.168.31.113", port=36379, password="123456", decode_responses=True)
r = redis.Redis(connection_pool=pool)

# Write 100 keys and read each one back immediately
for i in range(100):
    r.set("key-m49_%s" % i, "value-m49_%s" % i)
    data = r.get("key-m49_%s" % i)
    print(data)
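A quick way to confirm that all 100 keys landed, using redis-cli's non-blocking SCAN iterator instead of a blocking KEYS call (a sketch against the same NodePort):

# Count the keys written by the script
redis-cli -h 192.168.31.113 -p 36379 -a 123456 --scan --pattern 'key-m49_*' | wc -l
# Expected: 100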
Then query the keys in Redis
82) "key-m49_36" 83) "key-m49_10" 84) "key-m49_15" 85) "key-m49_21" 86) "key-m49_74" 87) "key-m49_50" 88) "key-m49_42" 89) "key-m49_31" 90) "key-m49_79" 91) "key-m49_90" 92) "key-m49_16" 93) "key-m49_49" 94) "key-m49_81" 95) "key-m49_12" 96) "key-m49_59" 97) "key-m49_66" 98) "key-m49_65" 99) "key-m49_54" 100) "key-m49_96" 101) "key-m49_34" 127.0.0.1:6379> get "key-m49_96" "value-m49_96" 127.0.0.1:6379>
4. Redis-cluster
Create the data directories on the NFS server
mkdir /data/k8s/wework/redis{0..5}
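These directories must also be exported by the NFS server. If the parent path is not already exported, an entry along the following lines would be needed (hypothetical options; this lab's actual /etc/exports is not reproduced here):

# Hypothetical export covering the whole data tree; adjust options to your environment
echo '/data/k8s *(rw,no_root_squash,sync)' >> /etc/exports
exportfs -r   # re-export without restarting the NFS service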
PV YAML
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis5
Create pv
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster/pv# kubectl apply -f redis-cluster-pv.yaml
persistentvolume/redis-cluster-pv0 created
persistentvolume/redis-cluster-pv1 created
persistentvolume/redis-cluster-pv2 created
persistentvolume/redis-cluster-pv3 created
persistentvolume/redis-cluster-pv4 created
persistentvolume/redis-cluster-pv5 created
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster/pv# kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
redis-cluster-pv0        5Gi        RWO            Retain           Available                                                            3s
redis-cluster-pv1        5Gi        RWO            Retain           Available                                                            3s
redis-cluster-pv2        5Gi        RWO            Retain           Available                                                            3s
redis-cluster-pv3        5Gi        RWO            Retain           Available                                                            3s
redis-cluster-pv4        5Gi        RWO            Retain           Available                                                            3s
redis-cluster-pv5        5Gi        RWO            Retain           Available                                                            3s
redis-datadir-pv-1       10Gi       RWO            Retain           Bound       wework/redis-datadir-pvc-1                               49m
test                     1Gi        RWX            Retain           Available                                    nfs                     57d
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-1                           16h
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-2                           16h
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Bound       wework/zookeeper-datadir-pvc-3                           16h
Configuration file redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
Create configMap
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl create configmap redis-conf --from-file=redis.conf -n wework
configmap/redis-conf created
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get configmaps -n wework
NAME               DATA   AGE
kube-root-ca.crt   1      2d
redis-conf         1      32s
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl describe configmaps redis-conf -n wework
Name:         redis-conf
Namespace:    wework
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>
Redis StatefulSet YAML
redis.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: wework
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
    - name: redis
      port: 6379
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: wework
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
    - name: redis-access
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: wework
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - redis
                topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: redis:4.0.14
          command:
            - "redis-server"
          args:
            - "/etc/redis/redis.conf"
            - "--protected-mode"
            - "no"
          resources:
            requests:
              cpu: "500m"
              memory: "500Mi"
          ports:
            - containerPort: 6379
              name: redis
              protocol: TCP
            - containerPort: 16379
              name: cluster
              protocol: TCP
          volumeMounts:
            - name: conf
              mountPath: /etc/redis
            - name: data
              mountPath: /var/lib/redis
      volumes:
        - name: conf
          configMap:
            name: redis-conf
            items:
              - key: redis.conf
                path: redis.conf
  volumeClaimTemplates:
    - metadata:
        name: data
        namespace: wework
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
Create Redis StatefulSet
A StatefulSet creates its Pods one at a time: each subsequent Pod is created only after the previous one is Running and Ready.
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl apply -f redis.yaml
service/redis created
service/redis-access created
statefulset.apps/redis created
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME                                             READY   STATUS              RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-lgx48             1/1     Running             0          46m
redis-0                                          0/1     ContainerCreating   0          44s
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running             0          5h12m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running             0          5h12m
zookeeper1-699d46468c-8jq4x                      1/1     Running             0          3h40m
zookeeper2-7cc484778-gj45x                       1/1     Running             0          3h40m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running             0          3h40m
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME                                             READY   STATUS              RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-lgx48             1/1     Running             0          47m
redis-0                                          1/1     Running             0          98s
redis-1                                          0/1     ContainerCreating   0          24s
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running             0          5h13m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running             0          5h13m
zookeeper1-699d46468c-8jq4x                      1/1     Running             0          3h41m
zookeeper2-7cc484778-gj45x                       1/1     Running             0          3h41m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running             0          3h41m
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME                                             READY   STATUS              RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-lgx48             1/1     Running             0          48m
redis-0                                          1/1     Running             0          3m36s
redis-1                                          1/1     Running             0          2m22s
redis-2                                          0/1     ContainerCreating   0          61s
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running             0          5h15m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running             0          5h15m
zookeeper1-699d46468c-8jq4x                      1/1     Running             0          3h43m
zookeeper2-7cc484778-gj45x                       1/1     Running             0          3h43m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running             0          3h43m
root@k8s-master-01:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME                                             READY   STATUS              RESTARTS   AGE
deploy-devops-redis-7864f5d7dc-lgx48             1/1     Running             0          49m
redis-0                                          1/1     Running             0          4m18s
redis-1                                          1/1     Running             0          3m4s
redis-2                                          1/1     Running             0          103s
redis-3                                          1/1     Running             0          22s
redis-4                                          1/1     Running             0          18s
redis-5                                          1/1     Running             0          14s
wework-nginx-deployment-cdbb4945f-7xgx5          1/1     Running             0          5h16m
wework-tomcat-app1-deployment-65d8d46957-s4666   1/1     Running             0          5h16m
zookeeper1-699d46468c-8jq4x                      1/1     Running             0          3h43m
zookeeper2-7cc484778-gj45x                       1/1     Running             0          3h43m
zookeeper3-cdf484f7c-jh6hz                       1/1     Running             0          3h43m
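volumeClaimTemplates gives every replica its own PVC, named <template>-<pod> (data-redis-0 through data-redis-5), which should now be bound to the six PVs created earlier. A quick check:

# Verify the per-replica claims are Bound (data-redis-0 .. data-redis-5)
kubectl get pvc -n wework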
Temporarily start a pod to initialize the Redis cluster
root@k8s-master-01:~# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n wework bash

## Replace the apt sources
root@ubuntu:/# cat > /etc/apt/sources.list << EOF
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF
root@ubuntu:/# apt update
root@ubuntu:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools -y
root@ubuntu1804:/# pip install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/27/79/8a850fe3496446ff0d584327ae44e7500daf6764ca1a382d2d02789accf7/pip-20.3.4-py2.py3-none-any.whl (1.5MB)
    100% |################################| 1.5MB 788kB/s
Installing collected packages: pip
  Found existing installation: pip 9.0.1
    Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-20.3.4
root@ubuntu1804:/# pip install redis-trib==0.5.1
Collecting redis-trib==0.5.1
  Downloading redis-trib-0.5.1.tar.gz (10 kB)
Collecting Werkzeug
  Downloading Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
     |################################| 298 kB 1.1 MB/s
Collecting click
  Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
     |################################| 82 kB 2.2 MB/s
Collecting hiredis
  Downloading hiredis-1.1.0-cp27-cp27mu-manylinux2010_x86_64.whl (58 kB)
     |################################| 58 kB 14.5 MB/s
Collecting retrying
  Downloading retrying-1.3.3.tar.gz (10 kB)
Requirement already satisfied: six>=1.7.0 in /usr/lib/python2.7/dist-packages (from retrying->redis-trib==0.5.1) (1.11.0)
Building wheels for collected packages: redis-trib, retrying
  Building wheel for redis-trib (setup.py) ... done
  Created wheel for redis-trib: filename=redis_trib-0.5.1-py2-none-any.whl size=11341 sha256=6f2df4b780df481dabf61d859abb65f7ae73b1a517faa79c093fbf05633733c2
  Stored in directory: /root/.cache/pip/wheels/fe/52/82/cf08baa7853197e3f591a295185666ec90f1e44b609d4456d4
  Building wheel for retrying (setup.py) ... done
  Created wheel for retrying: filename=retrying-1.3.3-py2-none-any.whl size=9532 sha256=86dcf1e1445fc7b140c402342d735ad7d7b172e73b8db141dbdd2d9b7eeee510
  Stored in directory: /root/.cache/pip/wheels/fa/24/c3/9912f4c9363033bbd0eafbec1b27c65b04d7ea6acd312876b0
Successfully built redis-trib retrying
Installing collected packages: Werkzeug, click, hiredis, retrying, redis-trib
Successfully installed Werkzeug-1.0.1 click-7.1.2 hiredis-1.1.0 redis-trib-0.5.1 retrying-1.3.3
Create Redis cluster
Redis Cluster divides the keyspace into 16384 hash slots, numbered 0 to 16383; each key is assigned to slot CRC16(key) mod 16384.
root@ubuntu1804:/# redis-trib.py create `dig +short redis-0.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-1.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-2.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.140.82:6379 checked
INFO:root:Instance at 172.100.109.88:6379 checked
INFO:root:Instance at 172.100.76.159:6379 checked
INFO:root:Add 5462 slots to 172.100.140.82:6379
INFO:root:Add 5461 slots to 172.100.109.88:6379
INFO:root:Add 5461 slots to 172.100.76.159:6379

# Add redis-3 as a replica of redis-0
root@ubuntu1804:/# redis-trib.py replicate --master-addr `dig +short redis-0.redis.wework.svc.magedu.local`:6379 \
  --slave-addr `dig +short redis-3.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.140.83:6379 has joined 172.100.140.82:6379; now set replica
INFO:root:Instance at 172.100.140.83:6379 set as replica to bbe92769df4e5164ec73542064220006d96bdc40

# Bind redis-4 to redis-1
root@ubuntu1804:/# redis-trib.py replicate --master-addr `dig +short redis-1.redis.wework.svc.magedu.local`:6379 --slave-addr `dig +short redis-4.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.76.160:6379 has joined 172.100.76.159:6379; now set replica
INFO:root:Instance at 172.100.76.160:6379 set as replica to 0c3ff3127c1cfcff63b96c51b727977cf619c9b3

# Bind redis-5 to redis-2
root@ubuntu1804:/# redis-trib.py replicate --master-addr `dig +short redis-2.redis.wework.svc.magedu.local`:6379 --slave-addr `dig +short redis-5.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.109.89:6379 has joined 172.100.109.88:6379; now set replica
INFO:root:Instance at 172.100.109.89:6379 set as replica to 98b86162f083a3f6269ed5abdfac9f3535729f90
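For reference, Redis 5+ folds this tooling into redis-cli itself, so the same three-master/three-replica topology could be created in one step; a sketch (not what was run here):

# Requires a redis-cli from Redis >= 5.0; pairs one replica with each master
redis-cli --cluster create \
  `dig +short redis-0.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-1.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-2.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-3.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-4.redis.wework.svc.magedu.local`:6379 \
  `dig +short redis-5.redis.wework.svc.magedu.local`:6379 \
  --cluster-replicas 1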
Connect to any pod in the Redis cluster
root@redis-5:/data# redis-cli
127.0.0.1:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:483
cluster_stats_messages_pong_sent:483
cluster_stats_messages_sent:966
cluster_stats_messages_ping_received:478
cluster_stats_messages_pong_received:483
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:966
127.0.0.1:6379> CLUSTER NODES
93fb7651914a1dad36190f55df96e167b92bc36b 172.100.76.160:6379@16379 slave 0c3ff3127c1cfcff63b96c51b727977cf619c9b3 0 1660118751907 2 connected
0c3ff3127c1cfcff63b96c51b727977cf619c9b3 172.100.76.159:6379@16379 master - 0 1660118751000 2 connected 10923-16383
bbe92769df4e5164ec73542064220006d96bdc40 172.100.140.82:6379@16379 master - 0 1660118750000 0 connected 0-5461
b22c872939d02f9b890f996d353b83ef7776644c 172.100.140.83:6379@16379 slave bbe92769df4e5164ec73542064220006d96bdc40 0 1660118750000 0 connected
143e2c972b90ba375269b8fafa64422e8b9635b0 172.100.109.89:6379@16379 myself,slave 98b86162f083a3f6269ed5abdfac9f3535729f90 0 1660118751000 5 connected
98b86162f083a3f6269ed5abdfac9f3535729f90 172.100.109.88:6379@16379 master - 0 1660118751000 1 connected 5462-10922
127.0.0.1:6379>
Try writing data on different nodes. A write succeeds only on the master that owns the key's slot; otherwise the server returns a MOVED redirection:
root@redis-0:/data# redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set key1 val1
(error) MOVED 9189 172.100.109.88:6379
127.0.0.1:6379>

root@redis-0:/data# redis-cli
127.0.0.1:6379> set key2 val2
OK
127.0.0.1:6379> set key3 val3
OK
127.0.0.1:6379> set key4 val4
(error) MOVED 13120 172.100.76.159:6379
127.0.0.1:6379> keys *
1) "key3"
2) "key2"

root@redis-1:/data# redis-cli
127.0.0.1:6379> set key1 val1
(error) MOVED 9189 172.100.109.88:6379
127.0.0.1:6379> set key4 val4
OK
127.0.0.1:6379> keys *
1) "key4"

root@redis-2:/data# redis-cli
127.0.0.1:6379> set key1 val1
OK
127.0.0.1:6379>
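The MOVED replies are redirections, not failures: the key's slot lives on another master. redis-cli can follow redirections automatically in cluster mode, and CLUSTER KEYSLOT shows where a key hashes; a short sketch:

# key1 hashes to slot 9189, matching the MOVED error above
redis-cli CLUSTER KEYSLOT key1
# -c starts the client in cluster mode, so MOVED/ASK redirections are followed
# transparently and "set key1 val1" succeeds from any node
redis-cli -c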
At this point, the Redis cluster StatefulSet deployment is complete.