1, Building a traditional proxy server
1. Environment configuration
Host | Host name | Operating system | IP address | Main software |
---|---|---|---|---|
Squid-Server | CentOS 7-5 | CentOS 7 | 192.168.126.15 | squid-3.5.28.tar.gz |
Web1 | CentOS 7-4 | CentOS 7 | 192.168.126.14 | httpd |
client | Win10 | Windows | 192.168.126.10 | / |
2. Construction steps
2.1 Squid-Server
vim /etc/squid.conf
......
http_access allow all
http_access deny all
http_port 3128
cache_effective_user squid
cache_effective_group squid

#Line 63, insert
cache_mem 64 MB                 #Memory used to cache frequently accessed web objects; preferably a multiple of 4 MB, recommended about 1/4 of physical memory
reply_body_max_size 10 MB       #Maximum reply (download) size allowed per request; when a web object exceeds it, the browser shows a "request or access is too large" error page. The default of 0 means no limit
maximum_object_size 4096 KB     #Maximum object size allowed into the cache; larger files are not cached but are forwarded directly to the user

service squid restart           #Restart the service for the configuration to take effect (systemctl restart squid also works)
netstat -natp | grep squid      #Confirm that squid started successfully
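As a quick sanity check (a hypothetical command, assuming curl is available on a Linux host in the 192.168.126.0/24 segment), you can fetch Web1 through the proxy and look for the headers Squid adds:

curl -x http://192.168.126.15:3128 -I http://192.168.126.14/   #-x points curl at the Squid proxy, -I fetches only the headers; expect HTTP 200 plus a Via/X-Cache header inserted by Squid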
In a production environment, you also need to adjust the firewall rules:
iptables -F
iptables -I INPUT -p tcp --dport 3128 -j ACCEPT
iptables -L INPUT
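If you want the rule to survive a reboot (this assumes the iptables-services package is installed, which the steps above do not cover), the running rule set can be saved:

yum -y install iptables-services   #assumed extra package; provides the classic save/restore service
service iptables save              #writes the current rules to /etc/sysconfig/iptables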
2.2 Proxy configuration on the client
- Configure the IP address of the client (Win10)
- Open the browser and configure the proxy settings (address 192.168.126.15, port 3128)
2.3 Web1
Note: the following commands are run on the second Linux host (Web1)
systemctl stop firewalld
systemctl disable firewalld
setenforce 0

yum -y install httpd          #Install the Apache httpd service with YUM
systemctl start httpd         #Start the service
netstat -natp | grep 80       #Confirm that port 80 is listening
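Optionally, drop a simple test page on Web1 so the browser shows something recognizable (the page content here is just an illustrative placeholder):

echo "Web1 test page" > /var/www/html/index.html   #hypothetical test page; any content works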
2.4 Verification
Enter the Web server's IP address (192.168.126.14) in the browser to access it through the proxy
View the new records in Web1's access log:
tail -f /var/log/httpd/access_log
2, Building a transparent proxy server
1. Environment configuration
Host | Host name | Operating system | IP address | Main software |
---|---|---|---|---|
Squid-Server | CentOS 7-5 | CentOS 7 | ens33: 192.168.126.15,ens36:12.0.0.1 | squid-3.5.28.tar.gz |
Web1 | CentOS 7-4 | CentOS 7 | 12.0.0.12 | httpd |
client | Win10 | Windows | 192.168.126.10 | / |
2. Construction steps
2.1 Web1
systemctl stop firewalld
systemctl disable firewalld
setenforce 0

vim /etc/sysconfig/network-scripts/ifcfg-ens33   #Change the IP, subnet mask and gateway in the NIC configuration (see the sketch below)
systemctl restart network                        #Note: after the restart the remote terminal disconnects; switch back to the VMware console
ifconfig
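A minimal sketch of what ifcfg-ens33 on Web1 might look like after the change (addresses taken from the environment table; other fields in the file stay as generated):

TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=12.0.0.12
NETMASK=255.255.255.0
GATEWAY=12.0.0.1        #points at the Squid server's ens36 address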
yum -y install httpd
systemctl restart httpd.service
2.2 Squid
#With the VM powered off, add a second network card, then power it back on
cd /etc/sysconfig/network-scripts/
cp ifcfg-ens33 ifcfg-ens36
vim ifcfg-ens33
vim ifcfg-ens36        #A sketch of the expected contents is shown below
systemctl restart network
ifconfig
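A minimal sketch of the two NIC files on the Squid server, assuming the addresses from the environment table (ens33 keeps its 192.168.126.15 address; ens36 serves the 12.0.0.0/24 side and needs no gateway):

#ifcfg-ens33 (key fields only)
DEVICE=ens33
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.126.15
NETMASK=255.255.255.0

#ifcfg-ens36 (copy of ens33, with the name adjusted and the UUID/HWADDR lines removed or regenerated)
DEVICE=ens36
NAME=ens36
ONBOOT=yes
BOOTPROTO=static
IPADDR=12.0.0.1
NETMASK=255.255.255.0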
Bind Squid to the IP address that serves the intranet clients, and enable the transparent proxy option transparent
vim /etc/squid.conf
......
http_access allow all
http_access deny all
#Line 60, modified
http_port 192.168.126.15:3128 transparent

systemctl restart squid
netstat -anpt | grep "squid"
#Turn on routing forwarding so the machine can forward traffic between its two network segments
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p
#Modify the firewall rules
iptables -F
iptables -t nat -F
iptables -t nat -I PREROUTING -i ens33 -s 192.168.126.0/24 -p tcp --dport 80 -j REDIRECT --to 3128
iptables -t nat -I PREROUTING -i ens33 -s 192.168.126.0/24 -p tcp --dport 443 -j REDIRECT --to 3128
iptables -I INPUT -p tcp --dport 3128 -j ACCEPT
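To confirm the redirect rules are in place (a simple check using standard iptables options):

iptables -t nat -nL PREROUTING --line-numbers   #both port 80 and 443 should show a REDIRECT to 3128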
2.3 client testing
- First, disable the proxy settings configured in the browser earlier
- Set the client's default gateway to the IP address of the proxy server's ens33 interface (192.168.126.15)
- Then visit http://12.0.0.12 again
2.4 viewing access logs
- View the new records in the Squid access log

tail -f /usr/local/squid/var/logs/access.log
#The Squid proxy server records the clients' visits to the target website
#Access the site from the client a few more times, then come back and check for new records

- Check the new records in the Web server's access log; the source address shown is the proxy server's external interface (12.0.0.1) instead of the client's IP
3, ACL access control
1. Overview
In the configuration file squid.conf, ACL access control is implemented in two steps:
- Use acl configuration items to define the conditions to be controlled;
- Use http_access configuration items to "allow" or "deny" access for the defined lists
#Define access control lists
#The usage format is as follows:
acl [list name] [list type] [list content] [...]

#Commonly used examples
vim /etc/squid.conf
......
acl localhost src 192.168.126.15/32              #The source address is 192.168.126.15
acl MYLAN src 192.168.126.0/24                   #The client network segment
acl destionhost dst 192.168.126.14/32            #The destination address is 192.168.126.14
acl MC20 maxconn 20                              #A maximum of 20 concurrent connections
acl PORT port 21                                 #Destination port 21
acl DMBLOCK dstdomain .qq.com                    #Destination domain, matching every site under the domain
acl BURL url_regex -i ^rtsp:// ^emule://         #URLs beginning with rtsp:// or emule://; -i means case-insensitive
acl PURL urlpath_regex -i \.mp3$ \.mp4$ \.rmvb$  #URL paths ending in .mp3, .mp4 or .rmvb
acl WORKTIME time MTWHF 08:30-17:30              #08:30-17:30 from Monday to Friday; "MTWHF" are the initials of the weekdays
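To illustrate how an acl pairs with http_access (an illustrative combination using the lists defined above, not part of the lab steps), the following would block the client segment from downloading media files during working hours while allowing everything else:

http_access deny MYLAN PURL WORKTIME   #a deny line with several lists matches only when all of them match
http_access allow all                  #rules are evaluated top-down, so the deny must come first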
2. Environment configuration
Host | Host name | Operating system | IP address | Main software |
---|---|---|---|---|
Squid-Server | CentOS 7-5 | CentOS 7 | ens33: 192.168.126.15,ens36:12.0.0.1 | squid-3.5.28.tar.gz |
Web1 | CentOS 7-4 | CentOS 7 | 192.168.126.14 | httpd |
Web2 | CentOS 7-3 | CentOS 7 | 192.168.126.13 | httpd |
client | Win10 | Windows | 192.168.126.10 | / |
3. Configuration steps
3.1 Squid-Server
vim /dest.list
192.168.126.14

vim /etc/squid.conf
......
acl destionhost dst "/dest.list"    #Reference the list contents from the specified file
......
http_access deny destionhost        #Note: a deny rule must be placed above http_access allow all
http_access allow all
http_port 3128

systemctl restart squid
netstat -natp | grep "squid"
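A quick command-line check through the proxy (hypothetical commands, assuming curl on any host that can reach the proxy) should show the ACL in action:

curl -x http://192.168.126.15:3128 -I http://192.168.126.14/   #denied by the ACL: Squid answers with a 403 "Access Denied" error page
curl -x http://192.168.126.15:3128 -I http://192.168.126.13/   #Web2 is not in dest.list, so this request still succeeds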
3.2 Web1 and Web2
#Note: remember to change Web1's network card configuration back (its IP is 192.168.126.14 in this scenario)
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
yum -y install httpd
systemctl start httpd
3.3 Client browser access test
- When the browser (using the proxy) accesses Web1's IP address 192.168.126.14, the access is denied
- Web2's IP address (192.168.126.13) can still be accessed normally, which shows the ACL has taken effect
- If the client re-enables the proxy and requests Web1's IP address again, every access is still denied
4, Squid log analysis
1. Install the image processing software package online
- For setting up an online YUM repository, see my other blog post: configuring an online YUM source repository
- Download the relevant software package: sarg-2.3.7.tar.gz (extraction code: qwer), then upload it to the /opt/ directory
yum install -y pcre-devel gd gd-devel
mkdir /usr/local/sarg
tar zxvf sarg-2.3.7.tar.gz -C /opt/
cd /opt/sarg-2.3.7
./configure --prefix=/usr/local/sarg --sysconfdir=/etc/sarg --enable-extraprotection
#Specify the installation directory and configuration file directory, and enable extra security protection
make -j 4 && make install
2. Modify the configuration file of sarg
vim /etc/sarg/sarg.conf

#Line 7, uncomment
access_log /usr/local/squid/var/logs/access.log   #Specify the access log file
#Line 25, uncomment
title "Squid User Access Reports"                 #Page title
#Line 120, uncomment
output_dir /var/www/html/squid-reports            #Report output directory
#Line 178, uncomment
user_ip no                                        #Display user names instead of IP addresses
#Line 184, uncomment and modify
topuser_sort_field connect reverse                #Sort the top-user list by number of connections in descending order (normal = ascending)
#Line 190, uncomment and modify
user_sort_field connect reverse                   #Sort user access records by number of connections in descending order
#Line 206, uncomment and modify
exclude_hosts /usr/local/sarg/noreport            #File listing sites that are excluded from the sorted report
#Line 257, uncomment
overwrite_report no                               #Whether to overwrite an existing report with the same name and date
#Line 289, uncomment and modify
mail_utility mailq.postfix                        #Command used to send the mail report
#Line 434, uncomment and modify
charset UTF-8                                     #Specify the UTF-8 character set
#Line 518, uncomment
weekdays 0-6                                      #Weekday range for the "top" ranking
#Line 525, uncomment
hours 0-23                                        #Hour range for the "top" ranking
#Line 633, uncomment
www_document_root /var/www/html                   #Specify the web page root directory
3. Generate a report
touch /usr/local/sarg/noreport                     #Create the exclusion file; domains added to it will not appear in the report
ln -s /usr/local/sarg/bin/sarg /usr/local/bin/     #Create a soft link so the command is found in PATH
sarg --help                                        #Show the available options
sarg                                               #Generate a report once
4. The client accesses the web page for testing
#On the Squid server: install and start httpd so the report directory under /var/www/html can be served
yum -y install httpd
systemctl start httpd
- Open a browser on the client and visit http://192.168.126.15/squid-reports to view the sarg report page
5. Add scheduled tasks to generate reports every day
vim /usr/local/sarg/report.sh

#!/bin/bash
#Get the current date
TODAY=$(date +%d/%m/%Y)
#Get yesterday's date
YESTERDAY=$(date -d "1 day ago" +%d/%m/%Y)
/usr/local/sarg/bin/sarg -l /usr/local/squid/var/logs/access.log -o /var/www/html/squid-reports -z -d $YESTERDAY-$TODAY &> /dev/null
exit 0
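Before relying on the cron job, it may be worth running the script once by hand and confirming that a new report appears (a simple manual check):

/usr/local/sarg/report.sh
ls /var/www/html/squid-reports        #a date-named report directory should have been created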
chmod +x /usr/local/sarg/report.sh     #Make the script executable
crontab -e
30 3 * * * /usr/local/sarg/report.sh   #Scheduled task: run the script at 3:30 a.m. every day
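To confirm the scheduled task was saved and the cron daemon is running (standard commands, nothing specific to this lab):

crontab -l                  #list the current user's cron jobs
systemctl status crond      #crond must be active for the job to run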
5, Squid reverse proxy
1. Overview
- If the requested resource is cached in the Squid reverse proxy server, the requested resource is returned directly to the client
- Otherwise, the reverse proxy server requests the resource from the back-end Web server, returns the response to the client, and also caches it locally for the next requester
2. Working mechanism
- Cache web page objects to reduce duplicate requests
- Distribute Internet requests to the intranet Web servers by round-robin or by weight
- Proxy the users' requests so that they do not access the Web servers directly, improving security
3. Environment configuration
Host | Host name | Operating system | IP address | Main software |
---|---|---|---|---|
Squid-Server | CentOS 7-5 | CentOS 7 | 192.168.126.15 | squid-3.5.28.tar.gz |
Web1 | CentOS 7-4 | CentOS 7 | 192.168.126.14 | httpd |
Web2 | CentOS 7-3 | CentOS 7 | 192.168.126.13 | httpd |
client | Win10 | Windows | 192.168.126.10 | / |
4. Configuration steps
4.1 Squid-Server
vim /etc/squid.conf
......
#Line 60, insert
http_port 192.168.126.15:80 accel vhost vport
cache_peer 192.168.126.14 parent 80 0 no-query originserver round-robin max-conn=30 weight=1 name=web1
cache_peer 192.168.126.13 parent 80 0 no-query originserver round-robin max-conn=30 weight=1 name=web2
cache_peer_domain web1 web2 www.xcf.com
Detailed explanation:
http_port 80 accel vhost vport
#Squid changes from a pure cache into a web accelerator: it listens for requests on port 80 and binds to the virtual host and port of the back-end web servers (vhost vport)
#When a request reaches Squid, it does not simply forward it; it either serves the data from its cache or fetches it from the bound back-end port
Parameter | Description |
---|---|
parent | The peer is a parent node |
80 | The peer's HTTP port (http_port) |
0 | The peer's ICP port (0 disables ICP queries) |
no-query | Do not send ICP queries; fetch data directly |
originserver | Treat the peer as an origin web server rather than a neighbor cache |
round-robin | Squid distributes requests among the parent nodes in rotation |
max-conn | Maximum number of connections to the peer |
weight | The peer's weight |
name | An alias for the peer |
systemctl stop httpd       #Stop the local httpd service first to avoid a port 80 conflict
systemctl restart squid    #Restart squid for the configuration to take effect
4.2 Web1
systemctl stop firewalld.service
setenforce 0
yum install -y httpd
systemctl start httpd
echo "hello world~" >> /var/www/html/index.html
4.3 Web2
systemctl stop firewalld.service
setenforce 0
yum install -y httpd
systemctl start httpd
echo "bye bye~" >> /var/www/html/index.html
4.4 client
Modify the C:\Windows\System32\drivers\etc\hosts file and add:
192.168.126.15 www.xcf.com

Then set the browser proxy to Address: 192.168.126.15, Port: 80, and save.
Finally, visit http://www.xcf.com in the browser to test.
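Alternatively, the round-robin behaviour can be checked from the command line (assuming curl is available; --resolve pins the host name to the reverse proxy without editing any hosts file):

for i in 1 2 3 4; do curl --resolve www.xcf.com:80:192.168.126.15 http://www.xcf.com/; done
#The responses should alternate between "hello world~" (Web1) and "bye bye~" (Web2)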