Brief introduction
LVS stands for Linux Virtual Server. It is a free software project initiated by Dr. Wensong Zhang. Before Linux kernel 2.4, the kernel had to be recompiled to support the LVS modules; since 2.4, the LVS modules are built into the kernel, so the functions provided by LVS can be used directly.
Objective: to build a high-performance, highly available server cluster with the load balancing technology provided by LVS and the Linux operating system, offering good reliability, scalability, and manageability, and thereby achieving the best service performance at low cost.
Load balancer concept
A server load balancer is software or a network device placed in front of a group of servers with the same or similar functions. It distributes traffic across the server group reasonably and, when one of the servers fails, transfers access requests to other servers that are still working normally.
SLB is short for Server Load Balancing. It can be divided into LAN server load balancing and WAN server load balancing; in the narrow sense, SLB refers to LAN server load balancing, as opposed to wide-area (global) server load balancing (GSLB).
Advantages:
Improved performance: the load balancer balances load across servers, improving the response speed and overall performance of the system;
Improved reliability: it monitors the running state of the servers, detects abnormal servers in time, and transfers access requests to servers that are still working normally, improving the reliability of the server group;
Improved maintainability: servers can be added flexibly as business volume grows, improving the scalability of the system while simplifying management.
Load balancing methods:
DNS round robin: configure multiple DNS A records for the same domain name on the DNS server, and let the DNS round-robin mechanism spread requests across the servers (see the zone-file sketch after this list).
Application software: cooperation and task scheduling among multiple servers are designed into the application itself. This approach generally uses one server as a hub to dispatch access requests, and requires application-layer support for access redirection or for task scheduling and forwarding mechanisms (e.g. LVS, HAProxy, Nginx).
Dedicated L4/L7 switches: the devices we commonly call load balancers. They generally work via address translation (NAT) in an L4/L7 switch, e.g. F5, Radware, Array Networks.
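As a hedged illustration of the DNS round robin approach, a BIND-style zone fragment might look like the following (the domain name is hypothetical; the addresses reuse this article's lab network):

```
; three A records for the same name; resolvers rotate through them,
; so successive clients are directed to different servers
www.example.com.   60   IN   A   192.168.16.12
www.example.com.   60   IN   A   192.168.16.13
www.example.com.   60   IN   A   192.168.16.14
```

Because DNS answers are cached, a failed server keeps receiving traffic until the records expire, which is one reason the other two methods exist.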
LVS architecture
It consists of three parts:
The front-end load balancing layer, represented by Load Balancer;
The middle server group layer, represented by Server Array;
The bottom data sharing layer, represented by Shared Storage.
1. Load Balancer layer
It sits at the front end of the whole cluster system and consists of one or more load schedulers (Director Servers). The LVS module is installed on the Director Server, whose main function is similar to a router: it holds the routing tables set up to implement the LVS function and uses them to distribute user requests to the application servers in the Server Array layer. The Director Server also runs a monitoring module (ldirectord) for the Real Server services, which watches the health of each Real Server: when a Real Server becomes unavailable it is removed from the LVS routing table, and it is re-added when it recovers.
2. Server Array layer
It consists of a group of machines that actually run the application services. A Real Server can be a WEB server, MAIL server, FTP server, DNS server, video server, or several of these. The Real Servers are connected by a high-speed LAN, or distributed over a WAN. In practice, the Director Server can also serve as a Real Server at the same time.
3. Shared Storage layer
It is a storage area that provides shared storage space and content consistency for all Real Servers. Physically it generally consists of disk array devices. To provide content consistency, data can be shared through the NFS network file system, but NFS performs poorly in busy business systems; in that case a cluster file system can be used, such as Red Hat's GFS or the OCFS2 file system provided by Oracle.
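As a minimal sketch of the NFS option, assuming a hypothetical storage host 192.168.16.20 that exports /export/www, each Real Server could mount the share over its web document root:

```bash
# on each Real Server (storage host and export path are hypothetical)
yum install nfs-utils -y
mount -t nfs 192.168.16.20:/export/www /var/www/html
# persist the mount across reboots
echo '192.168.16.20:/export/www /var/www/html nfs defaults,_netdev 0 0' >> /etc/fstab
```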
LVS related terms
VIP, Virtual IP Address: the IP address the Director uses to provide services to clients.
RIP, Real Server IP Address: the IP address used on the nodes inside the cluster.
DIP, Director IP Address: the IP address the Director uses to connect the internal and external networks.
CIP, Client IP Address: the IP address of the client host requesting the cluster service, used as the source IP of requests sent to the cluster.
The nodes inside the LVS cluster are called real servers, also known as cluster nodes.
LVS operating modes
The IP load balancing technology of LVS is implemented by the IPVS module, the core software of an LVS cluster system.
IPVS is installed on the Director Server, and a virtual IP address is created on the Director Server; users must access the service through this virtual IP, generally called the LVS VIP (Virtual IP). Access requests first reach the load scheduler via the VIP, and the load scheduler then selects a service node from the Real Server list to respond to the user's request. How the scheduler dispatches an arriving request to the Real Server providing the service, and how that Real Server returns data to the user, are the key techniques of the IPVS implementation.
There are three load balancing mechanisms for IPVS: NAT, TUN and DR.
1. VS/NAT (Virtual Server via Network Address Translation)
Virtual server via network address translation. When a user request reaches the scheduler, the scheduler rewrites the destination address of the request packet (the virtual IP address) to the address of the selected Real Server, rewrites the destination port to the corresponding port of that Real Server, and then forwards the request to it. When the Real Server returns data to the user, the reply passes through the load scheduler again, which rewrites the source address and source port back to the virtual IP address and the corresponding port before sending the data to the user, completing the whole load scheduling process.
2. VS/TUN (Virtual Server via IP Tunneling)
Virtual server via IP tunneling. Its connection scheduling and management are the same as in VS/NAT, but the packet forwarding method differs. In VS/TUN mode the scheduler uses IP tunneling to forward a user request to a Real Server, and that Real Server responds to the user directly, no longer passing through the front-end scheduler. There is no constraint on the Real Servers' location: they can be on the same network segment as the Director Server or on an independent network. Because the scheduler in TUN mode only handles the inbound request packets, the throughput of the cluster system is greatly improved.
3. VS/DR (Virtual Server via Direct Routing)
Virtual server via direct routing. Its connection scheduling and management are the same as in VS/NAT and VS/TUN, but the packet forwarding method differs. VS/DR sends the request to the Real Server by rewriting the MAC address of the request packet, and the Real Server returns the response directly to the client, eliminating the IP tunneling overhead of VS/TUN. This method has the best performance of the three load scheduling mechanisms, but the Director Server and the Real Servers must each have a network interface on the same physical network segment.
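The three forwarding mechanisms map onto ipvsadm's per-real-server flags. A hedged sketch, reusing the VIP and an RIP from the cluster built later in this article:

```bash
# create a virtual service on the VIP with round-robin scheduling,
# then attach a real server; exactly one forwarding flag is chosen:
ipvsadm -A -t 192.168.16.100:80 -s rr
ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.12:80 -g   # -g: VS/DR (direct routing)
# ... -m instead of -g would select VS/NAT (masquerading)
# ... -i instead of -g would select VS/TUN (IP tunneling)
```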
Mode response diagram:
In an LVS-DR configuration, the Director forwards all inbound requests to the nodes inside the cluster, but the cluster nodes send their replies directly to the client.
LVS scheduling algorithm
The scheduling method determines how to distribute the workload among these cluster nodes.
Scheduling method categories:
Fixed scheduling algorithms: rr, wrr, dh, sh
Dynamic scheduling algorithms: lc, wlc, lblc, lblcr, SED, NQ
rr, round-robin scheduling (Round-Robin): assigns requests to the RSs in turn, i.e. requests are shared equally among the RSs.
wrr, weighted round-robin scheduling (Weighted Round-Robin): assigns tasks according to the weight of each RS; an RS with a higher weight is given tasks first and receives more connections than an RS with a lower weight.
dh, destination hash scheduling (Destination Hashing): uses the destination address as the key into a static hash table to find the required RS.
sh, source hash scheduling (Source Hashing): uses the source address as the key into a static hash table to find the required RS.
wlc, weighted least-connection scheduling (Weighted Least-Connection): suppose the weight of each RS is Wi (i = 1..n) and its current TCP connection count is Ti (i = 1..n); the RS with the smallest Ti/Wi is selected as the next RS.
lc, least-connection scheduling (Least-Connection): the IPVS table stores all active connections; a new connection request is sent to the RS with the smallest current connection count.
lblc, locality-based least-connection scheduling (Locality-Based Least-Connection): assigns requests for the same destination address to the same RS; if that server is fully loaded, the request goes to the RS with the smallest connection count, which then becomes the first choice for the next assignment.
lblcr, locality-based least-connection with replication scheduling (Locality-Based Least-Connection with Replication): a destination address has a corresponding subset of RSs; requests for this address are assigned to the RS with the smallest connection count in the subset. If all servers in the subset are fully loaded, a server with a small connection count is picked from the cluster, added to the subset, and given the connection; if the subset has not been modified within a certain period, the most-loaded node in the subset is removed from it.
SED, shortest expected delay scheduling (Shortest Expected Delay): based on the wlc algorithm. For example, if machines A, B, and C have weights 1, 2, 3 and connection counts 1, 2, 3 respectively, wlc might dispatch a new request to any of them; SED instead computes A: (1+1)/1, B: (1+2)/2, C: (1+3)/3 and, based on the result, gives the connection to C.
NQ, never-queue scheduling (Never Queue): no queueing; if a real server has a connection count of 0, the request is dispatched to it directly without performing the SED computation.
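A hedged sketch of selecting a scheduling algorithm with ipvsadm (addresses reused from the cluster built later; -s picks the algorithm and -w sets the per-RS weight consulted by wrr, wlc, and SED):

```bash
# weighted round robin: node3 receives roughly twice node2's requests
ipvsadm -A -t 192.168.16.100:80 -s wrr
ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.12:80 -g -w 1
ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.13:80 -g -w 2
# switch the existing virtual service to weighted least-connection
ipvsadm -E -t 192.168.16.100:80 -s wlc
```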
Working principles of the LVS modes
1. Introduction to the LVS-DR cluster
1. Basic working principle of LVS
1. When a user sends a request to the load balancing scheduler (Director Server), the request enters kernel space.
2. The PREROUTING chain receives the user request first, checks that the destination IP is a local address, and passes the packet to the INPUT chain.
3. IPVS works on the INPUT chain. When the user request reaches INPUT, IPVS compares it with the defined cluster services; if the request matches a defined cluster service, IPVS forcibly modifies the destination IP address and port in the packet and sends the new packet to the POSTROUTING chain.
4. The POSTROUTING chain receives the packet, finds that the destination IP address is one of its back-end servers, and finally sends the packet to that back-end server via routing.
2. Working process of VS-DR mode:
First, the request from the client's CIP is sent to the Director's VIP. The Director then forwards the request to a cluster node (Real Server), keeping the same VIP as the destination IP address. The chosen cluster node replies and sends the packet directly to the client (without passing through the Director), using the VIP as the source IP address of the reply packet.
3. Application characteristics of LVS-DR mode
1) All cluster nodes (RS) must be in the same physical network segment as the Director (i.e. in the same LAN);
2) All inbound (not outbound) client requests are received by the Director first and then forwarded to the cluster node Real Servers;
3) Generally, the cluster nodes (RS) should have an external IP of their own rather than using the Director or one fixed machine as their default gateway, so that reply packets go directly to the client and the return path does not become a bottleneck;
4) The VIP address must be bound on the lo interface of every cluster node (RS), so that the RS accepts packets whose destination IP is the VIP rather than its own RIP;
5) Because every cluster node (RS) binds the VIP address on its lo interface, an ARP problem arises: by default the cluster nodes would answer ARP queries for the VIP themselves. Therefore ARP suppression must be configured on all cluster nodes, leaving the response to ARP requests for the VIP to the LVS Director;
6) Many operating systems can be used on the RS real servers inside the cluster, as long as the operating system can hide the VIP from ARP, e.g. Windows, Linux, and Unix;
7) LVS/DR mode does not require the scheduler to enable IP forwarding, unlike LVS/NAT mode;
8) An LVS/DR Director can withstand more concurrent requests and forward to more servers (on the order of 100) than an LVS/NAT Director (10-20 servers).
4. LVS-DR mode ARP suppression
If ARP on the RSs is not suppressed, ARP broadcasts reach the real servers through their physical network interfaces, and since the VIP is also configured on the real servers, they would answer the request themselves.
The goal is that when the front-end router sends a request to the VIP, it can only reach the VIP on the Director. Solutions:
(1) Static address binding: bind the VIP statically to the Director's MAC address on the router. You may not have configuration permission on the router, and static binding is hard to apply when the Director fails over.
(2) arptables: disable ARP for the VIP on the real servers. Basically, the following commands disable ARP for the VIP at the real servers:
```bash
arptables -F
arptables -A INPUT -d $VIP -j DROP
arptables -A OUTPUT -s $VIP -j mangle --mangle-ip-s $RIP
```
(3) Kernel parameters: configure the VIP on the RS as an alias of the lo interface and modify the Linux kernel parameters so that Linux only answers ARP requests on the corresponding interface (see the sketch below).
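A hedged sketch of option (3), matching the arp_ignore/arp_announce values used later in this article's build-out; persisting them through /etc/sysctl.conf is an assumption about your setup:

```bash
# on each real server: answer ARP only on the interface that owns the
# queried address (arp_ignore=1) and announce only the best local source
# address (arp_announce=2), so the lo-bound VIP stays silent on the LAN
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p
```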
2. LVS-NAT mode
1. Working principle of LVS-NAT
1. When the user request arrives at the Director Server, the request packet first reaches the PREROUTING chain in kernel space. At this point the source IP of the packet is CIP and the destination IP is VIP.
2. PREROUTING finds that the destination IP of the packet is local and sends the packet to the INPUT chain.
3. IPVS checks whether the service requested by the packet is a cluster service. If so, it modifies the destination IP address of the packet to the back-end server's IP and sends the packet to the POSTROUTING chain. At this point the source IP is CIP and the destination IP is RIP.
4. The POSTROUTING chain sends the packet to the Real Server via routing.
5. The Real Server finds that the destination is its own IP, builds a response packet, and sends it back to the Director Server. At this point the source IP is RIP and the destination IP is CIP.
6. Before responding to the client, the Director Server modifies the source IP address to its own VIP address and then responds to the client. At this point the source IP is VIP and the destination IP is CIP.
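Condensed into commands, a hedged sketch of the Director side (mirroring the NAT build-out at the end of this article, where 192.168.136.137 is the Director's outward-facing VIP):

```bash
# enable forwarding, then define the service with NAT-mode (-m) real
# servers; the real servers must use the DIP as their default gateway
sysctl -w net.ipv4.ip_forward=1
ipvsadm -A -t 192.168.136.137:80 -s rr
ipvsadm -a -t 192.168.136.137:80 -r 192.168.16.12:80 -m
ipvsadm -a -t 192.168.136.137:80 -r 192.168.16.13:80 -m
```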
2. Mode characteristics
1. The cluster nodes must be on the same network;
2. The real servers must point their gateway to the load scheduler;
3. The RIPs are usually private IPs, used only for communication among the cluster nodes;
4. The load scheduler must sit between the clients and the real servers and act as their gateway;
5. Port mapping is supported;
6. The operating system of the load scheduler must be Linux; the real servers can use any system.
3. LVS-TUN mode
1. Working principle of LVS-TUN
1. When the user request arrives at the Director Server, the request packet first reaches the PREROUTING chain in kernel space. At this point the source IP of the packet is CIP and the destination IP is VIP.
2. PREROUTING finds that the destination IP of the packet is local and sends the packet to the INPUT chain.
3. IPVS checks whether the service requested by the packet is a cluster service. If so, it encapsulates an additional IP header in front of the request packet, with source IP DIP and destination IP RIP, then sends it to the POSTROUTING chain. At this point the outer source IP is DIP and the destination IP is RIP.
4. The POSTROUTING chain sends the packet to the RS according to the newly encapsulated outer IP header (because an extra IP header is wrapped around the packet, this can be understood as transmission through a tunnel). At this point the source IP is DIP and the destination IP is RIP.
5. After the RS receives the packet, it sees that the outer destination is its own IP, so it accepts the packet. After removing the outer IP header, it finds another IP header inside, whose destination is the VIP on its own lo interface. The RS then processes the request and sends the response out through its eth0 network card via the lo interface. At this point the source IP is VIP and the destination IP is CIP.
6. Finally, the response packet is delivered directly to the client.
2. Mode characteristics
1. The cluster nodes do not have to be on the same physical network, but they must all have public IPs (or be routable);
2. The real servers must not point their gateway to the load scheduler;
3. The RIPs must be public (routable) addresses;
4. The load scheduler handles only inbound requests;
5. Port mapping is not supported;
6. Both ends of the tunnel (the Director and the real servers) must support the tunneling function.
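This article does not build a TUN cluster, so the following is only an illustrative sketch of TUN configuration, under the assumption that the ipip kernel module provides the tunl0 interface:

```bash
# on each real server: load IP-in-IP, bind the VIP to the tunnel
# interface, and relax reverse-path filtering so tunneled packets
# arriving on tunl0 are not dropped
modprobe ipip
ifconfig tunl0 192.168.16.100 netmask 255.255.255.255 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0

# on the Director: attach real servers with the tunneling flag
ipvsadm -A -t 192.168.16.100:80 -s rr
ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.12:80 -i
```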
LVS Cluster Construction
| Host name and address | Role |
|---|---|
| node1: 192.168.16.11 | DS |
| node2: 192.168.16.12 | RS |
| node3: 192.168.16.13 | RS |
| node4: 192.168.16.14 | RS |
| node5: 192.168.16.15 | test machine |

| Kernel version | 3.10.0-1062.el7.x86_64 |
|---|---|
| Release version | CentOS Linux release 7.7.1908 (Core) |
Environmental preparation:
1. Turn off the firewall:
```bash
[root@node1 ~]# systemctl stop firewalld.service
[root@node2 ~]# systemctl stop firewalld.service
[root@node3 ~]# systemctl stop firewalld.service
[root@node4 ~]# systemctl stop firewalld.service
```
2. Synchronize time (check the time on each node):
```bash
[root@node1 ~]# date
Tue Feb 23 11:33:24 CST 2021
[root@node2 ~]# date
Tue Feb 23 11:33:28 CST 2021
[root@node3 ~]# date
Tue Feb 23 11:33:18 CST 2021
[root@node4 ~]# date
Tue Feb 23 11:33:34 CST 2021
```
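The article only inspects the clocks; if they drift, a hedged way to actually synchronize them on CentOS 7 (assuming the nodes can reach the default pool servers):

```bash
# run on every node: install chrony, start it, and verify sync state
yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc tracking
```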
LVS-DR mode cluster construction
Common parameters and options of ipvsadm
| Option | Description |
|---|---|
| -A --add-service | Add a new virtual service |
| -E --edit-service | Edit a virtual service |
| -D --delete-service | Delete a virtual service |
| -C --clear | Clear all virtual service rules |
| -R --restore | Restore virtual service rules |
| -a --add-server | Add a new real server to a virtual service |
| -e --edit-server | Edit a real server |
| -d --delete-server | Delete a real server |
| -L \| -l --list | Display the virtual service rules in the kernel |
| -n --numeric | Display IPs and ports in numeric form |
| -c --connection | Display the existing connections in IPVS; can also be used to analyze scheduling |
| -Z --zero | Clear the statistics of forwarded packets |
| -p --persistent | Configure the persistence time |
| --set tcp tcpfin udp | Configure the three timeout values (tcp/tcpfin/udp) |
| -t \| -u | Virtual service using the TCP/UDP protocol |
| -g \| -m \| -i | LVS mode: DR \| NAT \| TUN |
| -w | Configure the weight of a real server |
| -s | Configure the load balancing algorithm, e.g. rr, wrr, lc |
| --timeout | Display the configured tcp/tcpfin/udp timeouts |
| --stats | Display historical forwarding statistics (cumulative values) |
| --rate | Display forwarding rate information (instantaneous values) |
Method 1: command operation
DS configuration
1. Configure the LVS virtual IP (VIP).
Temporary configuration (lost when the server reboots):
```bash
[root@node1 ~]# ifconfig ens33:100 192.168.16.100 netmask 255.255.255.255 up
```
Permanent configuration:
```bash
[root@node1 ~]# nmcli connection modify ens33 +ipv4.addresses 192.168.16.100/24
[root@node1 ~]# nmcli connection up ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@node1 ~]# ip a
ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4f:2d:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.11/24 brd 192.168.16.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.16.100/24 brd 192.168.16.255 scope global secondary noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::998b:20de:c321:ff69/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
```
2. Manually add the LVS service and add the RSs:
```bash
[root@node1 ~]# yum install ipvsadm.x86_64 -y
[root@node1 ~]# ipvsadm -A -t 192.168.16.100:80 -s rr
[root@node1 ~]# ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.12:80 -g
[root@node1 ~]# ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.13:80 -g
[root@node1 ~]# ipvsadm -a -t 192.168.16.100:80 -r 192.168.16.14:80 -g
```
3. View the configuration:
```bash
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.16.100:80 rr
  -> 192.168.16.12:80             Route   1      0          0
  -> 192.168.16.13:80             Route   1      0          0
  -> 192.168.16.14:80             Route   1      0          0
```
4. Delete servers from the LVS-DR configuration.
Delete an RS:
```bash
[root@node1 ~]# ipvsadm -d -t 192.168.16.100:80 -r 192.168.16.12:80
```
Delete the virtual service (DS):
```bash
[root@node1 ~]# ipvsadm -D -t 192.168.16.100:80
```
View:
```bash
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
```
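Rules added with ipvsadm live only in the kernel and are lost on reboot. A hedged way to persist them on CentOS 7, assuming the ipvsadm package's ipvsadm.service (which restores rules from /etc/sysconfig/ipvsadm) behaves as described:

```bash
# dump the current rules where the ipvsadm unit expects them,
# then enable the unit so they are restored at boot
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm
```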
RS configuration
1. Install the httpd package:
```bash
[root@node2 ~]# yum install httpd -y
[root@node3 ~]# yum install httpd -y
[root@node4 ~]# yum install httpd -y
```
2. Start the service:
```bash
[root@node2 ~]# systemctl start httpd.service
[root@node3 ~]# systemctl start httpd.service
[root@node4 ~]# systemctl start httpd.service
```
3. Check whether the service is started:
```bash
[root@node2 ~]# netstat -lnupt | grep 80
tcp6       0      0 :::80      :::*      LISTEN      1595/httpd
```
4. Edit the test page content:
```bash
[root@node2 ~]# echo 'this is a test1' > /var/www/html/index.html
[root@node3 ~]# echo 'this is a test2' > /var/www/html/index.html
[root@node4 ~]# echo 'this is a test3' > /var/www/html/index.html
```
5. Test whether the RS pages are accessible:
```bash
[root@node1 ~]# curl 192.168.16.12
this is a test1
[root@node1 ~]# curl 192.168.16.13
this is a test2
[root@node1 ~]# curl 192.168.16.14
this is a test3
```
6. Bind the VIP to the lo interface:
```bash
[root@node2 ~]# ifconfig lo:100 192.168.16.100 netmask 255.255.255.255 up
[root@node3 ~]# ifconfig lo:100 192.168.16.100 netmask 255.255.255.255 up
[root@node4 ~]# ifconfig lo:100 192.168.16.100 netmask 255.255.255.255 up
```
7. Add a local route for accessing the VIP (note: the original had a typo, 192.18.16.100):
```bash
[root@node2 ~]# route add -host 192.168.16.100 dev lo
[root@node3 ~]# route add -host 192.168.16.100 dev lo
[root@node4 ~]# route add -host 192.168.16.100 dev lo
```
8. Suppress ARP responses on the RS side. On each real server, adjust the kernel parameters to close the ARP responses:
```bash
# echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
# echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
# echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
# echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
```
> arp_ignore - INTEGER: defines the response modes for ARP queries whose target address is a local IP.
> 0 (default): reply to ARP queries for any local IP address on any network interface.
> 1: reply only if the target IP address is a local address configured on the interface that received the query.
> 2: reply only if the target IP address is a local address configured on the receiving interface, and the sender's IP address is within the subnet of that interface.
> 3: do not reply for local addresses configured with host scope; only reply for global and link-scope addresses.
> 4-7: reserved, unused.
> 8: do not reply to any ARP query for local addresses.
>
> arp_announce - INTEGER: restricts which local source IP address may be announced in ARP requests sent from a network interface.
> 0 (default): use any local address, configured on any interface (eth0, eth1, lo, ...).
> 1: try to avoid source addresses that are not within the subnet of the sending interface. This is useful when the source IP of the ARP request must be reachable through this interface: the kernel checks whether the target IP falls within one of the subnets on all interfaces, and if it does not belong to any interface's subnet, the request is handled as in level 2.
> 2: always use the best local address for the target. In this mode the source address of the IP packet is ignored and a local address preferred for talking to the target is selected, primarily an address, on any interface, whose subnet contains the target IP. If no suitable address is found, the address of the sending interface, or another address that may receive the ARP reply, is chosen and the request is sent through that interface. This limits the use of a local VIP address, so the preferred interface address is announced instead.
node5 test
```bash
[root@node5 ~]# for ((i=1;i<=10;i++))
> do
> curl 192.168.16.100
> done
this is a test3
this is a test2
this is a test1
this is a test3
this is a test2
this is a test1
this is a test3
this is a test2
this is a test1
this is a test3
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.16.100:80 rr
  -> 192.168.16.12:80             Route   1      0          3
  -> 192.168.16.13:80             Route   1      0          3
  -> 192.168.16.14:80             Route   1      0          4
```
Method 2: operation via scripts
DS script editing: node1
Edit the script: `vim /etc/init.d/lvs_ds`
```bash
#!/bin/sh
#
# Startup script handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#   available server built on a cluster of real servers, with the load
#   balancer running on Linux.
# description: start LVS of DR

LOCK=/var/lock/ipvsadm.lock
VIP=192.168.16.100
RIP1=192.168.16.12
RIP2=192.168.16.13
RIP3=192.168.16.14
DipName=ens33

. /etc/rc.d/init.d/functions

start() {
    PID=`ipvsadm -Ln | grep ${VIP} | wc -l`
    if [ $PID -gt 0 ]; then
        echo "The LVS-DR Server is already running !"
    else
        # Set the Virtual IP Address
        /sbin/ifconfig ${DipName}:10 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev ${DipName}:10
        # Clear the IPVS Table
        /sbin/ipvsadm -C
        # Set up LVS
        /sbin/ipvsadm -At $VIP:80 -s rr
        /sbin/ipvsadm -at $VIP:80 -r $RIP1:80 -g
        /sbin/ipvsadm -at $VIP:80 -r $RIP2:80 -g
        /sbin/ipvsadm -at $VIP:80 -r $RIP3:80 -g
        /bin/touch $LOCK
        # Run LVS
        echo "starting LVS-DR Server is ok !"
    fi
}

stop() {
    # Clear LVS and the VIP
    /sbin/ipvsadm -C
    /sbin/route del -host $VIP dev ${DipName}:10
    /sbin/ifconfig ${DipName}:10 down >/dev/null
    rm -rf $LOCK
    echo "stopping LVS-DR server is ok !"
}

status() {
    if [ -e $LOCK ]; then
        echo "The LVS-DR Server is already running !"
    else
        echo "The LVS-DR Server is not running !"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: $1 {start|stop|restart|status}"
        exit 1
esac
exit 0
```
Add execution permission:
```bash
[root@node1 ~]# chmod +x /etc/init.d/lvs_ds
```
Enable the service:
```bash
[root@node1 ~]# chkconfig --add lvs_ds
[root@node1 ~]# chkconfig lvs_ds on
```
Test:
```bash
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@node1 ~]# systemctl start lvs_ds
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.16.100:80 rr
  -> 192.168.16.12:80             Route   1      0          0
  -> 192.168.16.13:80             Route   1      0          0
  -> 192.168.16.14:80             Route   1      0          0
```
RS script editing and configuration: node2, node3, node4
1. Edit the script: `vim /etc/init.d/lvs_rs`
```bash
#!/bin/sh
#
# Startup script handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#   available server built on a cluster of real servers, with the load
#   balancer running on Linux.
# description: start LVS of DR-RIP

LOCK=/var/lock/ipvsadm.lock
VIP=192.168.16.100

. /etc/rc.d/init.d/functions

start() {
    PID=`ifconfig | grep lo:10 | wc -l`
    if [ $PID -ne 0 ]; then
        echo "The LVS-DR-RIP Server is already running !"
    else
        /sbin/ifconfig lo:10 $VIP netmask 255.255.255.255 broadcast $VIP up
        /sbin/route add -host $VIP dev lo:10
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/eth0/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/eth0/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        /bin/touch $LOCK
        echo "starting LVS-DR-RIP server is ok !"
    fi
}

stop() {
    /sbin/route del -host $VIP dev lo:10
    /sbin/ifconfig lo:10 down >/dev/null
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/eth0/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    rm -rf $LOCK
    echo "stopping LVS-DR-RIP server is ok !"
}

status() {
    if [ -e $LOCK ]; then
        echo "The LVS-DR-RIP Server is already running !"
    else
        echo "The LVS-DR-RIP Server is not running !"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: $1 {start|stop|restart|status}"
        exit 1
esac
exit 0
```
2. Add execution permission:
```bash
# chmod +x /etc/init.d/lvs_rs
```
3. Start the service:
```bash
# chkconfig --add lvs_rs
# chkconfig lvs_rs on
# systemctl daemon-reload
# systemctl start lvs_rs
```
node5 test
```bash
[root@node5 ~]# for ((i=1;i<=10;i++)); do curl 192.168.16.100; done
this is a test2
this is a test1
this is a test3
this is a test2
this is a test1
this is a test3
this is a test2
this is a test1
this is a test3
this is a test2
```
Health check configuration
If a real server goes down, some users will be affected and unable to access the service, so configure a health check to detect whether an RS is down.
```bash
[root@node1 ~]# mkdir /scripts
```
Edit the health check script:
```bash
[root@node1 ~]# vim /scripts/check.sh

#!/bin/bash
# function:
# version: 1.1

PORT="80"
VIP=192.168.16.100
RIP=(
    192.168.16.12
    192.168.16.13
    192.168.16.14
)

function check_url() {
    #for ((i=0; i<`echo ${#RIP[*]}`; i++))
    for i in ${RIP[*]}
    do
        judge=($(curl -I -s http://${i} | awk 'NR==1'))
        #if [[ "${judge[1]}" == '200' && "${judge[2]}" == 'OK' ]]
        if [ "${judge[1]}" == '200' -a "${judge[2]}" == 'OK' ]
        then
            # RS is healthy: re-add it if it is missing from the table
            if [ `ipvsadm -Ln | grep "${i}" | wc -l` -ne 1 ]
            then
                ipvsadm -a -t $VIP:$PORT -r ${i}:$PORT
            fi
        else
            # RS is down: remove it from the table if it is still there
            if [ `ipvsadm -Ln | grep "${i}" | wc -l` -eq 1 ]
            then
                ipvsadm -d -t $VIP:$PORT -r ${i}:$PORT
            fi
        fi
    done
}

while true
do
    check_url
    sleep 5
done
```
Add execution permission:
```bash
[root@node1 ~]# chmod +x /scripts/check.sh
```
Run it in the background:
```bash
[root@node1 ~]# nohup /scripts/check.sh &
```
Restart the DS script:
```bash
[root@node1 ~]# systemctl start lvs_ds
```
Monitor LVS dynamically:
```bash
[root@node1 ~]# watch ipvsadm -Ln
```
Test:
```bash
[root@node2 ~]# systemctl stop httpd.service
```
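After httpd stops on node2, the check script should drop 192.168.16.12 from the rule set within its 5-second cycle. A hedged way to verify (expected output omitted):

```bash
# on node1: 192.168.16.12 should have disappeared from the table
ipvsadm -Ln
# on node5: the loop should now only return test2 and test3
for ((i=1;i<=6;i++)); do curl 192.168.16.100; done
# start httpd on node2 again and watch the RIP get re-added
```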
LVS-NAT mode cluster construction
DS configuration: node1
Add a second network card to node1 and set it to host-only mode.
1. Enable routing forwarding.
Temporary settings:
```bash
# Method 1:
[root@node1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
# Method 2:
[root@node1 ~]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
```
Permanent settings:
```bash
[root@node1 ~]# vim /etc/sysctl.conf
[root@node1 ~]# sysctl -p
net.ipv4.ip_forward = 1
```
2. Configure LVS:
```bash
[root@node1 ~]# ipvsadm -A -t 192.168.136.137:80 -s rr
[root@node1 ~]# ipvsadm -a -t 192.168.136.137:80 -r 192.168.16.13:80 -m
[root@node1 ~]# ipvsadm -a -t 192.168.136.137:80 -r 192.168.16.12:80 -m
[root@node1 ~]# ipvsadm -a -t 192.168.136.137:80 -r 192.168.16.14:80 -m
```
RS configuration: node2, node3, node4
Specify gateway to load scheduler node1
```bash
[root@node2 ~]# nmcli connection modify ens33 ipv4.gateway 192.168.16.11
[root@node2 ~]# nmcli connection up ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@node3 ~]# nmcli connection modify ens33 ipv4.gateway 192.168.16.11
[root@node3 ~]# nmcli connection up ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@node4 ~]# nmcli connection modify ens33 ipv4.gateway 192.168.16.11
[root@node4 ~]# nmcli connection up ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
```
**Test: node5**
Add a second network card to node5 and set it to host-only mode, then:
```bash
[root@node5 ~]# for ((i=1;i<=10;i++))
> do
> curl 192.168.136.137
> done
this is a test3
this is a test1
this is a test2
this is a test3
this is a test1
this is a test2
this is a test3
this is a test1
this is a test2
this is a test3
```