nginx quick guide: from beginner to advanced

1. Basic concepts of nginx

(1) What is nginx and what it does

nginx is a high-performance HTTP and reverse proxy server: it handles high concurrency (up to roughly 50,000 concurrent connections), uses little memory, and is heavily optimized for performance.

2. nginx installation, common commands and configuration files

(1) Installing nginx on a Linux system

/usr/src: system-level source directory.
/usr/local/src: user-level source directory.
#Install the build tools and library dependencies
    yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel

#Install the PCRE dependency. PCRE is what enables nginx's rewrite functionality.
    wget http://downloads.sourceforge.net/project/pcre/pcre/8.35/pcre-8.35.tar.gz
#Unzip the package
    tar zxvf pcre-8.35.tar.gz
#Enter the source directory
[root@bogon src]# cd pcre-8.35
#Compile and install
[root@bogon pcre-8.35]# ./configure
[root@bogon pcre-8.35]# make && make install
#Check the pcre version
[root@bogon pcre-8.35]# pcre-config --version
Install nginx
1. Download nginx from https://nginx.org/en/download.html
[root@bogon src]# cd /usr/local/src/
[root@bogon src]# wget http://nginx.org/download/nginx-1.12.2.tar.gz
2. Unzip the package
[root@bogon src]# tar zxvf nginx-1.12.2.tar.gz
3. Enter the source directory
[root@bogon src]# cd nginx-1.12.2
4. Compile and install (you can specify the installation path or use the default)
[root@bogon nginx-1.12.2]# ./configure --prefix=/usr/local/webserver/nginx --with-http_stub_status_module --with-http_ssl_module --with-pcre=/usr/local/src/pcre-8.35
[root@bogon nginx-1.12.2]# make
[root@bogon nginx-1.12.2]# make install
5. Check the nginx version
[root@bogon nginx-1.12.2]# /usr/local/webserver/nginx/sbin/nginx -v

6. After the installation succeeds, there is a startup script under /usr/local/nginx/sbin (if you compiled with the --prefix shown above, the path is /usr/local/webserver/nginx/sbin instead):

[root@localhost nginx-1.12.2]# cd /usr/local/nginx/
[root@localhost nginx]# ls
conf  html  logs  sbin
[root@localhost nginx]# cd sbin/
[root@localhost sbin]# ls
nginx  nginx.old
[root@localhost sbin]#
#View open port numbers
firewall-cmd --list-all
#Open the service and port number, then reload for the change to take effect
firewall-cmd --add-service=http --permanent
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --reload

(2) nginx common commands
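nginx is controlled through its binary in the sbin directory of the installation prefix; the commonly used commands are:

```shell
# Check the version (use -V to also see compile options)
./nginx -v
# Test the configuration file for syntax errors
./nginx -t
# Start nginx
./nginx
# Stop immediately
./nginx -s stop
# Stop gracefully (finish serving in-flight requests first)
./nginx -s quit
# Reload the configuration without dropping connections
./nginx -s reload
```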

(3) nginx configuration file

The nginx configuration file consists of three parts

1. Global block

The global block runs from the start of the configuration file to the events block. It contains directives that affect the operation of the nginx server as a whole:

It mainly includes the user (and group) that runs the nginx server, the number of worker processes allowed, the storage path of the process PID file, the log storage path and type, and the inclusion of other configuration files.

The key directive for concurrency is worker_processes: the larger the value, the more concurrent processing nginx can support, but it is constrained by the hardware and software of the host.

2. events block

The directives in the events block mainly affect the network connections between the nginx server and its users.
For example, worker_connections 1024; sets the maximum number of connections each worker process supports.
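Putting the two blocks together, a minimal sketch (the paths and values are illustrative defaults):

```nginx
#---- global block ----
worker_processes  1;            # number of worker processes
pid        logs/nginx.pid;      # where the master process PID is stored
error_log  logs/error.log;      # error log path

#---- events block ----
events {
    worker_connections  1024;   # max connections per worker process
}
```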

3. http block

This is the most frequently configured part of the nginx service: most features, such as proxying, caching, log definitions and third-party modules, are configured here.

Note: the http block in turn contains an http global block and one or more server blocks.

Configure forward and reverse proxy

Forward proxy: configure a proxy server on the client (browser) to access the Internet through the proxy server.

Reverse proxy: the client only needs to send the request to the reverse proxy server, and the reverse proxy server selects the target server to obtain the data and return it to the client.

3. nginx configuration instance 1 - reverse proxy

1. Desired effect
(1) Open a browser, enter your own server IP in the address bar, and land on the home page of the tomcat running on the Linux system.
2. Preparation
(1) Install tomcat on the Linux system:
wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.70/bin/apache-tomcat-7.0.70.tar.gz
#Unzip the package
tar -zxvf apache-tomcat-7.0.70.tar.gz
#Start the tomcat server (on Linux, tomcat uses the default port 8080)
#Open the port for external access
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload    #or: service iptables restart
#View the open port numbers
firewall-cmd --list-all

Start tomcat server

View log files after startup

Access tomcat server in browser

Reverse proxy case 1:

Analysis of access process

Specific configuration

  1. In the hosts file of the Windows system, map the domain name to the server IP:
     add 192.168.171.131 www.123.com at the end (use your own server's IP).

2. Configure request forwarding in nginx (reverse proxy configuration): /usr/local/nginx/conf/nginx.conf
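A minimal sketch of the relevant server block (assuming tomcat runs on the default port 8080 of the same machine, as in the preparation step):

```nginx
server {
    listen       80;
    server_name  192.168.171.131;

    location / {
        proxy_pass  http://127.0.0.1:8080;   # forward every request to tomcat
    }
}
```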

Restart nginx (here I stop it first and then start it again).

Enter http://www.123.com/ to access it. Note: do not add a port number. Port 80 is used by default to reach nginx, which then forwards the request to the tomcat server, achieving the reverse proxy effect.

Reverse proxy case 2:

Objectives:
Use the nginx reverse proxy to jump to services on different ports according to the access path; nginx listens on port 9001.
Visiting http://192.168.171.131:9001/edu/ jumps directly to 127.0.0.1:8080
Visiting http://192.168.171.131:9001/vod/ jumps directly to 127.0.0.1:8081
2. Preparation
(1) Prepare two tomcat servers, one on port 8080 and one on port 8081.
(2) Create the folders and test pages.
3. Specific configuration
Modify the nginx configuration file as follows:
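Based on the objectives above, the server block would look roughly like this (regex locations match on the path):

```nginx
server {
    listen       9001;
    server_name  192.168.171.131;

    location ~ /edu/ {
        proxy_pass  http://127.0.0.1:8080;
    }

    location ~ /vod/ {
        proxy_pass  http://127.0.0.1:8081;
    }
}
```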

(2) Open the ports used for access: 9001, 8080 and 8081.

Stop nginx and start it again.

4. Test

Summary of the above cases:

The client sends a request to the nginx reverse proxy server; nginx forwards the client's request to the target server and obtains the data, and the target server's response travels back through the proxy server to the client.
The location directive:
This directive is used to match request URLs.
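For reference, the general syntax and match modifiers of location:

```nginx
location [ = | ~ | ~* | ^~ ] uri {
    # ...
}
# =   exact match; on success, matching stops immediately
# ~   case-sensitive regular expression match
# ~*  case-insensitive regular expression match
# ^~  prefix match; if it is the longest prefix match, regex locations are skipped
# (no modifier) plain prefix match
```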

4. nginx load balancing configuration

When a single server can no longer handle the load, we increase the number of servers and distribute requests among them: instead of concentrating all requests on one machine, the load is spread across multiple servers. This is what we call load balancing.

nginx configuration instance 2 - load balancing

1. Desired effect
(1) Enter http://192.168.171.131:9001/edu/a.html in the browser address bar; with load balancing, requests are averaged across ports 8080 and 8081.
2. Preparation
(1) Prepare two tomcat servers, one on 8080 and one on 8081.
(2) In the webapps directory of both tomcats, create a folder named edu, and create a page a.html inside it for testing.
3. Configure load balancing in the nginx configuration file (add/modify in the http block).
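A sketch of the configuration added to the http block (the upstream name myserver is illustrative; the listen port matches the 9001 address used above):

```nginx
upstream myserver {
    server 192.168.171.131:8080;
    server 192.168.171.131:8081;
}

server {
    listen       9001;
    server_name  192.168.171.131;

    location / {
        proxy_pass  http://myserver;   # distribute requests across the upstream group
    }
}
```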

Refreshing the page switches between the 8080 and 8081 servers in turn, one request each.

nginx server allocation strategies

1) Round robin (default)
Each request is assigned to a different backend server one by one in chronological order; if a backend server goes down, it is removed automatically.
2) weight
weight represents the weight and is 1 by default; the higher the weight, the more requests the server is assigned.
This sets the polling probability: weight is proportional to the access ratio, and is used when backend servers have uneven performance.

3) ip_hash

Each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same backend server; this can solve the session-sharing problem.

4) fair (third party)

Requests are assigned according to the response time of the backend servers; servers with shorter response times are assigned requests first.
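Each strategy is declared inside the upstream block; for example (the addresses follow the tutorial's setup, and fair requires the third-party nginx-upstream-fair module):

```nginx
# weight
upstream myserver {
    server 192.168.171.131:8080 weight=5;
    server 192.168.171.131:8081 weight=10;   # gets roughly twice the traffic
}

# ip_hash
upstream myserver {
    ip_hash;
    server 192.168.171.131:8080;
    server 192.168.171.131:8081;
}

# fair (third-party module)
upstream myserver {
    server 192.168.171.131:8080;
    server 192.168.171.131:8081;
    fair;
}
```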

5. nginx dynamic and static separation

The goal is to speed up website response: dynamic pages and static pages are served by different servers, which speeds up parsing and reduces the pressure on the original single server.

1. Preparation

1) Prepare static resources in Linux system for access

2. Modify nginx configuration file
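A common sketch of the configuration, assuming the static files were placed under /data/www and /data/image on the Linux host (the paths are illustrative):

```nginx
server {
    listen       80;
    server_name  192.168.171.131;

    location /www/ {
        root   /data/;            # serves files from /data/www/...
        index  index.html index.htm;
    }

    location /image/ {
        root      /data/;
        autoindex on;             # list the directory contents in the browser
        expires   3d;             # let browsers cache static files for 3 days
    }
}
```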

Restart nginx

3. Test:

(1) Enter the address in the browser, adding the port and the resource name.

6. nginx configuring high availability clusters

Preparation for configuring high availability:

(1) Two servers are required: 192.168.171.131 and 192.168.171.129.
(2) Install nginx on both servers.
(3) Install keepalived on both servers using yum:
yum install keepalived -y

#After installation, the /etc/keepalived directory is created.

Complete high availability configuration (master-slave configuration)

(1) The /etc/keepalived/keepalived.conf configuration file

Master server

#You can directly copy and replace the contents of the source file

global_defs {                     #Global definitions
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.171.131   #This server's own IP
    smtp_connect_timeout 30
    router_id LVS_DEVEL           #Must be unique per machine
}
#Detection script configuration
vrrp_script chk_http_port {
    script "/usr/local/nginx/nginx_check.sh"   #Path and name of the detection script
    interval 2                    #Interval in seconds between script executions
    weight -20                    #If the check fails, this server's priority is reduced by 20
}
#Virtual IP configuration
vrrp_instance VI_1 {
    state MASTER                  #Write MASTER on the primary server, BACKUP on the backup server
    interface ens33               #Network interface
    virtual_router_id 51          #virtual_router_id must be identical on master and backup
    priority 100                  #The master gets the larger value, the backup the smaller one
    advert_int 1                  #Heartbeat interval in seconds to check the server is alive (default 1)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.171.50            #VRRP virtual IP; must be on the same network segment; multiple VIPs can be bound
    }
}

Backup server

You can directly copy this and replace the contents of the original file.

global_defs {                     #Global definitions
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.171.129   #This server's own IP
    smtp_connect_timeout 30
    router_id LVS_DEVEL           #Must be unique per machine
}
#Detection script configuration
vrrp_script chk_http_port {
    script "/usr/local/nginx/nginx_check.sh"   #Path and name of the detection script
    interval 2                    #Interval in seconds between script executions
    weight -20                    #If the check fails, this server's priority is reduced by 20
}
#Virtual IP configuration
vrrp_instance VI_1 {
    state BACKUP                  #Write MASTER on the primary server, BACKUP on the backup server
    interface ens33               #Network interface
    virtual_router_id 51          #virtual_router_id must be identical on master and backup
    priority 90                   #The master gets the larger value, the backup the smaller one
    advert_int 1                  #Heartbeat interval in seconds to check the server is alive (default 1)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.171.50            #VRRP virtual IP; must be on the same network segment; multiple VIPs can be bound
    }
}

(2) Add the nginx_check.sh detection script (identical on master and backup); its path and name must match the script line configured in keepalived.conf.

#!/bin/bash
# If nginx is not running, try to restart it; if it still is not running,
# kill keepalived so the virtual IP fails over to the backup server.
A=$(ps -C nginx --no-headers | wc -l)
if [ "$A" -eq 0 ]; then
        /usr/local/nginx/sbin/nginx
        sleep 2
        if [ "$(ps -C nginx --no-headers | wc -l)" -eq 0 ]; then
                killall keepalived
        fi
fi

Start the nginx and keepalived services on both the master and the backup:

[root@localhost sbin]# ./nginx -s stop
[root@localhost sbin]# ./nginx
[root@localhost sbin]# systemctl start keepalived.service
[root@localhost sbin]# ps -ef |grep keepalived
root     110602      1  0 15:46 ?        00:00:00 /usr/sbin/keepalived -D
root     110603 110602  0 15:46 ?        00:00:00 /usr/sbin/keepalived -D
root     110604 110602  0 15:46 ?        00:00:00 /usr/sbin/keepalived -D
root     110657   7510  0 15:50 pts/0    00:00:00 grep --color=auto keepalived
[root@localhost sbin]#

Final test

1. Enter the virtual IP address 192.168.171.50 in the browser address bar

2. Stop the nginx and keepalived services on the master server and visit again: the request still succeeds, now served by the backup server.

7. nginx principles

  1. Master & worker - (management and work)

2. How does the worker work?

3. What are the benefits of one master and multiple workers?

1) You can use nginx -s reload for hot deployment: idle workers load the new configuration while busy workers finish their current requests first, so configuration changes take effect without stopping the service.

2) Each worker is an independent process. If one worker has a problem, the other workers are unaffected and continue to compete for and serve requests, so the service is not interrupted.

4. How many workers should be set?

It is most appropriate for the number of workers to equal the number of CPU cores on the server.
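In the global block this is set with worker_processes; since nginx 1.3.8/1.2.5 it can also pick the CPU count automatically:

```nginx
worker_processes  4;      # e.g. a 4-core server
# or:
worker_processes  auto;   # match the number of CPU cores automatically
```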

5. Connection count: worker_connections

Question 1: how many connections does a worker use for one request?
Answer: 2 or 4. Serving static content uses 2 connections (the client connection plus reading the local resource); acting as a reverse proxy uses 4 (two with the client and two with the backend server).

Question 2: nginx has one master and four workers, and each worker supports a maximum of 1024 connections. What is the maximum supported concurrency?

Formula: maximum static concurrency = worker_connections × worker_processes / 2, and maximum reverse-proxy concurrency = worker_connections × worker_processes / 4. Here that gives 4 × 1024 / 2 = 2048 static requests, or 4 × 1024 / 4 = 1024 proxied requests.
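With the numbers from the question above (four workers, 1024 connections each, and the per-request connection costs of 2 for static content and 4 for proxying), the arithmetic can be checked directly:

```shell
worker_processes=4
worker_connections=1024
# Static content costs 2 connections per request:
echo $(( worker_processes * worker_connections / 2 ))   # maximum static concurrency
# Reverse proxying costs 4 connections per request:
echo $(( worker_processes * worker_connections / 4 ))   # maximum proxied concurrency
```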


Posted by mamavi on Sat, 16 Apr 2022 10:10:44 +0930