Implementation environment
master | node1 | node2 |
---|---|---|
192.168.1.1 | 192.168.1.2 | 192.168.1.4 |
```
#Get the package (the version must match the 1.5.1 archive unpacked below)
[root@master ~]# wget https://releases.hashicorp.com/consul/1.5.1/consul_1.5.1_linux_amd64.zip
#Unpack
[root@master ~]# unzip consul_1.5.1_linux_amd64.zip
Archive:  consul_1.5.1_linux_amd64.zip
  inflating: consul
#/usr/local/bin is the conventional place for user-installed executables; binaries placed there are not overwritten by system upgrades
[root@master ~]# mv consul /usr/local/bin/
[root@master ~]# chmod +x /usr/local/bin/consul
#Start the consul service and run it in the background
[root@master ~]# nohup consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.1.1 -client=0.0.0.0 -node=master &
```
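nohup is fine for a quick test, but the agent will not survive a reboot. As a more durable alternative, here is a minimal systemd unit sketch; the unit name and file path are my own choices, not part of the original setup:

```
[root@master ~]# cat > /etc/systemd/system/consul.service <<'EOF'
[Unit]
Description=Consul agent
After=network-online.target

[Service]
# Same flags as the nohup command above; consul runs in the foreground for systemd
ExecStart=/usr/local/bin/consul agent -server -bootstrap -ui \
    -data-dir=/var/lib/consul-data -bind=192.168.1.1 \
    -client=0.0.0.0 -node=master
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
[root@master ~]# systemctl daemon-reload && systemctl enable --now consul
```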
Command interpretation:
nohup ... &: run the command in the background so it keeps running after the shell exits
-server: run the agent in server mode
-bootstrap: typically used on the first server node, which elects itself as leader
-ui: enable Consul's built-in web UI
-data-dir: location of the data store
-bind: the server IP address the service binds to
-client: the address the client interfaces listen on; 0.0.0.0 allows access from anywhere
-node: the node name, here set to the host name
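For reference, the same flags can also be expressed as a Consul configuration file and loaded with -config-file; this is a sketch of my own (the file path is an arbitrary choice), not how the original deployment was done:

```
[root@master ~]# mkdir -p /etc/consul.d
[root@master ~]# cat > /etc/consul.d/server.json <<'EOF'
{
  "server": true,
  "bootstrap": true,
  "ui": true,
  "data_dir": "/var/lib/consul-data",
  "bind_addr": "192.168.1.1",
  "client_addr": "0.0.0.0",
  "node_name": "master"
}
EOF
[root@master ~]# consul agent -config-file=/etc/consul.d/server.json
```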
Open ports:
8300: server RPC between cluster nodes
8301: Serf LAN gossip within the cluster
8302: Serf WAN gossip between data centers
8500: HTTP API and web UI
8600: DNS interface for querying node and service information
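As a quick check of the DNS interface, you can query port 8600 with dig (assuming dig is installed; consul.service.consul is Consul's built-in name for its server nodes):

```
[root@master ~]# dig @127.0.0.1 -p 8600 consul.service.consul
[root@master ~]# dig @127.0.0.1 -p 8600 master.node.consul
```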
```
//View consul information
[root@master ~]# consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 0
build:
    prerelease =
    revision = 40cec984
    version = 1.5.1
consul:
    acl = disabled
    bootstrap = true
    known_datacenters = 1
    leader = true
    leader_addr = 192.168.1.1:8300   #leader node IP and port
    server = true
raft:
    applied_index = 60
    commit_index = 60
    fsm_pending = 0
    last_contact = 0
    last_log_index = 60
    last_log_term = 2
    last_snapshot_index = 0
    last_snapshot_term = 0
    latest_configuration = [{Suffrage:Voter ID:e2448f78-f220-e848-4316-128872e93ea1 Address:192.168.1.1:8300}]
    latest_configuration_index = 1
    num_peers = 0
    protocol_version = 3
    protocol_version_max = 3
    protocol_version_min = 0
    snapshot_version_max = 1
    snapshot_version_min = 0
    state = Leader
    term = 2
runtime:
    arch = amd64
    cpu_count = 4
    goroutines = 81
    max_procs = 4
    os = linux
    version = go1.12.1
serf_lan:
    coordinate_resets = 0
    encrypted = false
    event_queue = 1
    event_time = 2
    failed = 0
    health_score = 0
    intent_queue = 0
    left = 0
    member_time = 1
    members = 1
    query_queue = 0
    query_time = 1
serf_wan:
    coordinate_resets = 0
    encrypted = false
    event_queue = 0
    event_time = 1
    failed = 0
    health_score = 0
    intent_queue = 0
    left = 0
    member_time = 1
    members = 1
    query_queue = 0
    query_time = 1
```
```
[root@master ~]# consul members
Node    Address           Status  Type    Build  Protocol  DC   Segment
master  192.168.1.1:8301  alive   server  1.5.1  2         dc1  <all>
```
Add node1 and node2 to the consul cluster
Here, we run the consul service in containers.
#Pay attention to the container name; this is a pitfall. Why the 2? Here 1 stands for stand-alone and 2 stands for cluster.
```
#Node node1
[root@node1 ~]# docker run -d --name consul2 -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600 -p 8600:8600/udp --restart always progrium/consul:latest -join 192.168.1.1 -advertise 192.168.1.2 -client 0.0.0.0 --node=node1
#Node node2
[root@node2 ~]# docker run -d --name consul2 -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600 -p 8600:8600/udp --restart always progrium/consul:latest -join 192.168.1.1 -advertise 192.168.1.4 -client 0.0.0.0 --node=node2
```
#Possible problem: IPv4 traffic forwarding is not enabled; just enable it:
```
[root@node1 ~]# docker run -d --name consul2 -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600 -p 8600:8600/udp --restart always progrium/consul:latest -join 192.168.1.1 -advertise 192.168.1.2 -client 0.0.0.0 --node=node1
WARNING: IPv4 forwarding is disabled. Networking will not work.
bd6660df07110e0e524112f931a65d17a54eb29e73cbe47e017bf012e662e07e
#Append (rather than overwrite) the setting, then apply it
[root@node1 ~]# echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
[root@node1 ~]# sysctl -p
[root@node1 ~]# systemctl restart network
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# docker restart consul2
consul2
```
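To confirm forwarding is now on (a quick check of my own, not part of the original transcript):

```
[root@node1 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
```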
Web UI access address: http://<host IP>:8500
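Besides the web UI, cluster membership can be verified from the master with the CLI or the standard Consul HTTP API:

```
#All three nodes should now show as alive
[root@master ~]# consul members
#Same information via the HTTP catalog endpoint
[root@master ~]# curl -s http://192.168.1.1:8500/v1/catalog/nodes
```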
Deploy the Registrator service on node1 and node2
Registrator is a tool that automatically discovers the services provided by Docker containers and registers or deregisters them in a back-end service registry. Supported registries include Consul, etcd, SkyDNS 2, and ZooKeeper.
```
#Execute on node node1
docker run -d --name registrator -v /var/run/docker.sock:/tmp/docker.sock --restart always gliderlabs/registrator consul://192.168.1.2:8500
#Execute on node node2
docker run -d --name registrator -v /var/run/docker.sock:/tmp/docker.sock --restart always gliderlabs/registrator consul://192.168.1.4:8500
#Note: the IP address in the registry URI is the host IP of the node you are executing on
```
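To confirm that Registrator is actually writing into Consul, you can list the registered services through the HTTP API on the master (a check of my own; the service list will grow as the web containers below are started):

```
[root@master ~]# curl -s http://192.168.1.1:8500/v1/catalog/services
```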
Deploy an nginx service on the master
```
//Dependent environment
# yum -y install gcc openssl openssl-devel zlib zlib-devel pcre pcre-devel
# useradd -M -s /sbin/nologin nginx
# tar -zxf nginx-1.14.0.tar.gz
# cd nginx-1.14.0
# ./configure --user=nginx --group=nginx \
    --with-http_stub_status_module --with-http_realip_module \
    --with-pcre --with-http_ssl_module
# make && make install
# ln -s /usr/local/nginx/sbin/* /usr/local/bin/
# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
# nginx
```
PS: here, nginx acts as a reverse proxy for the nginx container services on the backend node1 and node2. We therefore first deploy some services on node1 and node2, and, to make the load-balancing effect easy to see, we give each container a distinctive home page after it starts.
node1: web1,web2
node2: web3,web4
```
#Create the nginx containers; -p 80 maps container port 80 to a random host port
[root@node1 ~]# docker run -itd --name web1 -p 80 nginx:latest
77d39af9c5c8bcaa1839b9002bd09ac093f1e0699cef5bd04515ae38b1ceace8
[root@node1 ~]# docker run -itd --name web2 -p 80 nginx:latest
98ad0a4a5244cbb812ee1c5d2cee4818edc360d6eaea25c70c88f07e88d04a68
#Modify the nginx home page of each container
[root@node1 ~]# docker exec -it web1 sh
# echo web1 > /usr/share/nginx/html/index.html
# curl 127.0.0.1
web1
[root@node1 ~]# docker exec -it web2 sh
# echo web2 > /usr/share/nginx/html/index.html
# curl 127.0.0.1
web2
```
```
#Create the containers
[root@node2 ~]# docker run -itd --name web3 -p 80 nginx:latest
29e0495b3a9c1db99f33374115d626fa37217ab67313e1e20845ccf2dd7ccb82
[root@node2 ~]# docker run -itd --name web4 -p 80 nginx:latest
2b2fcd9bd3742b9f806d1804a59a61c1f320bd616e66d5d216d6b2c85231fc20
#Modify the home pages
[root@node2 ~]# docker exec -it web3 sh
# echo web3 > /usr/share/nginx/html/index.html
# curl 127.0.0.1
web3
[root@node2 ~]# docker exec -it web4 sh
# echo web4 > /usr/share/nginx/html/index.html
# curl 127.0.0.1
web4
```
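Because -p 80 lets Docker pick the host port, you can look up the actual mapping with docker port and curl it from the host; the port number below is illustrative and will differ on your machine:

```
[root@node1 ~]# docker port web1 80
0.0.0.0:32768
[root@node1 ~]# curl 192.168.1.2:32768
web1
```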
Create the template and change the configuration file of the nginx service
```
[root@master ~]# mkdir -p /usr/local/nginx/consul
[root@master ~]# vim /usr/local/nginx/consul/nginx.ctmpl
upstream http_backend {
    {{range service "nginx"}}
    server {{ .Address }}:{{ .Port }};
    {{ end }}
}

server {
    listen 8000;
    server_name localhost;
    location / {
        proxy_pass http://http_backend;
    }
}
```
```
[root@master ~]# vim /usr/local/nginx/conf/nginx.conf
#Line 116, inside the http {} block:
116         include /usr/local/nginx/consul/*.conf;
[root@master ~]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
```
```
#Reload nginx
[root@master ~]# nginx -s reload
```
Install the consul-template command
Introduction to consul-template
consul-template is an application, built on Consul, that automatically rewrites configuration files. Before consul-template appeared, most service-discovery stacks were built from similar systems such as ZooKeeper and etcd + confd.
After Consul officially launched its own templating system, consul-template, dynamic configuration systems split into two camps: etcd + confd and Consul + consul-template. consul-template is positioned much like confd, except that confd's back end can be either etcd or Consul.
```
[root@master soft]# unzip consul-template_0.19.5_linux_amd64.zip
Archive:  consul-template_0.19.5_linux_amd64.zip
  inflating: consul-template
[root@master soft]# mv consul-template /usr/local/bin/
[root@master soft]# chmod +x /usr/local/bin/consul-template
#Render the template to vhost.conf and reload nginx on every change
[root@master soft]# nohup consul-template -consul-addr 192.168.1.1:8500 -template "/usr/local/nginx/consul/nginx.ctmpl:/usr/local/nginx/consul/vhost.conf:/usr/local/bin/nginx -s reload" &
[2] 5729
[root@master soft]# nohup: ignoring input and appending output to 'nohup.out'
[root@master soft]# cat /usr/local/nginx/consul/vhost.conf
upstream http_backend {
    server 192.168.1.2:32768;
    server 192.168.1.2:32769;
    server 192.168.1.4:32768;
    server 192.168.1.4:32769;
}

server {
    listen 8000;
    server_name localhost;
    location / {
        proxy_pass http://http_backend;
    }
}
```
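If you want to inspect the rendered output before wiring in the reload command, consul-template can render the template once to stdout with its -once and -dry flags; a sanity check of my own, not part of the original run:

```
[root@master soft]# consul-template -consul-addr 192.168.1.1:8500 \
    -template "/usr/local/nginx/consul/nginx.ctmpl" -dry -once
```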
From now on, whenever an nginx web container is added to or removed from the backend, the generated configuration file is updated automatically. This works because the consul-template command we ran ends with /usr/local/bin/nginx -s reload, which is executed after every re-render.
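A quick way to watch this in action (a sketch of my own, assuming the stack above is running; the name web5 is arbitrary): start one more web container on node1, then re-read the generated file on the master.

```
#On node1: start another backend container
[root@node1 ~]# docker run -itd --name web5 -p 80 nginx:latest
#On master: a new server line should appear within moments
[root@master soft]# cat /usr/local/nginx/consul/vhost.conf
```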
Common consul errors:
1. After accessing the consul UI at IP:8500 in a browser, the following error is reported:
Consul returned an error. You may have visited a URL that is loading an unknown resource, so you can try going back to the root or try re-submitting your ACL Token/SecretID by going back to ACLs. Try looking in our documentation
Solution: switch to a newer browser, such as a recent version of Chrome or Firefox.
```
#Access: load balancing is working
[root@master soft]# curl localhost:8000
web1
[root@master soft]# curl localhost:8000
web2
[root@master soft]# curl localhost:8000
web3
[root@master soft]# curl localhost:8000
web4
[root@master soft]# curl localhost:8000
web1
[root@master soft]# curl localhost:8000
web2
[root@master soft]# curl localhost:8000
web3
[root@master soft]# curl localhost:8000
web4
#Stop the containers on host node1 and watch real-time discovery take effect
[root@node1 soft]# docker stop web1
web1
[root@node1 soft]# docker stop web2
web2
#Access from the master again
[root@master soft]# curl localhost:8000
web3
[root@master soft]# curl localhost:8000
web4
[root@master soft]# curl localhost:8000
web3
[root@master soft]# curl localhost:8000
web4
```