Advanced usage and configuration of nginx
1, Advanced features
1. nginx cross-domain requests (CORS)
Add inside the server {...} block within http {...}:
```nginx
# Domains allowed to make cross-domain requests; * means all
add_header 'Access-Control-Allow-Origin' *;
# Allow requests to carry cookies
add_header 'Access-Control-Allow-Credentials' 'true';
# Allowed request methods, such as GET/POST/PUT/DELETE
add_header 'Access-Control-Allow-Methods' *;
# Allowed request headers
add_header 'Access-Control-Allow-Headers' *;
```
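For non-simple requests (for example, requests with custom headers or a JSON body) the browser first sends an OPTIONS preflight request. Below is a minimal sketch of answering the preflight directly in nginx; the location, domain and backend address are assumptions for illustration only:

```nginx
server {
    listen 80;
    server_name www.test.com;   # example domain, adjust as needed

    location / {
        # Answer CORS preflight requests without forwarding them upstream
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' *;
            add_header 'Access-Control-Allow-Methods' *;
            add_header 'Access-Control-Allow-Headers' *;
            # Let the browser cache the preflight result for an hour
            add_header 'Access-Control-Max-Age' 3600;
            return 204;
        }
        proxy_pass http://127.0.0.1:8080;   # assumed backend
    }
}
```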
2. nginx hotlink protection (anti-leech)
Add inside the server {...} block within http {...}:
```nginx
# Validate the request source (Referer)
valid_referers *.imooc.com;
# Requests with an invalid referer fall into the block below
if ($invalid_referer) {
    return 404;
}
```
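Depending on requirements, you may also want to accept requests that carry no Referer header at all (for example, a user typing the URL directly) or whose Referer was stripped by a proxy or firewall. A variant sketch using the none and blocked keywords of valid_referers:

```nginx
# none    - requests without a Referer header
# blocked - Referer present but removed/masked by a proxy or firewall
valid_referers none blocked *.imooc.com;
if ($invalid_referer) {
    return 404;
}
```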
3. nginx static file compression (gzip)
Add in the http {...} block:
```nginx
# Enable gzip to reduce the amount of data sent
gzip on;
# Only compress responses larger than 1k
gzip_min_length 1k;
# Use four 16k memory buffers for the compressed result stream
gzip_buffers 4 16k;
# Compression level 1-9: 1 is the lowest ratio and fastest, 9 is the highest ratio, slowest and most CPU-intensive
gzip_comp_level 5;
# MIME types to compress
gzip_types application/javascript text/plain text/css application/json application/xml text/javascript;
# Add "Vary: Accept-Encoding" so proxies can keep compressed and uncompressed copies;
# whether compression is used is then decided from the client's HTTP headers, avoiding waste
gzip_vary on;
# Disable gzip for IE6 and below, since some IE versions handle gzip poorly
gzip_disable "MSIE [1-6].";
```
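If assets are pre-compressed on disk, the http_gzip_static_module (included in the configure command shown later in this section) can serve the existing .gz files directly instead of compressing on every request. A minimal sketch:

```nginx
# Serve an existing pre-compressed file (e.g. app.js.gz for app.js) if present;
# otherwise fall back to the plain file
gzip_static on;
```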
4. Configure https
- Obtain the ssl certificate from the corresponding domain name service provider (Alibaba Cloud, Tencent Cloud, etc.)
- Extract the nginx-related certificate files from the downloaded ssl certificate (two files: .key and .pem)
- Upload the nginx certificate files to a location on the nginx server (e.g. /usr/local/nginx/cert)
- Configure nginx (http default port: 80, https default port: 443)
```nginx
server {
    listen 443;
    server_name www.test.com;
    # Enable ssl
    ssl on;
    # Configure the ssl certificate
    ssl_certificate /usr/local/nginx/cert/214806751670884.pem;
    # Configure the certificate key
    ssl_certificate_key /usr/local/nginx/cert/214806751670884.key;
    # ssl session cache
    ssl_session_cache shared:SSL:1m;
    # ssl session timeout
    ssl_session_timeout 5m;
    # Configure the cipher suites, written according to the openssl standard
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    location ~* \.(txt)$ {
        root public;
    }

    location / {
        proxy_pass http://127.0.0.1:8011;
    }
}
```
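It is common to also redirect plain http traffic to the https server. A minimal sketch using the same example domain:

```nginx
server {
    listen 80;
    server_name www.test.com;
    # Permanently redirect all http requests to https
    return 301 https://$host$request_uri;
}
```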
If nginx reports a missing ssl module when it starts after the above configuration, nginx was compiled without the ssl module. Recompile it with the ssl module as follows.
Enter the nginx source directory and execute the following commands in turn:
```bash
./configure \
  --prefix=/usr/local/nginx \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/temp/nginx/client \
  --http-proxy-temp-path=/var/temp/nginx/proxy \
  --http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
  --http-scgi-temp-path=/var/temp/nginx/scgi \
  --with-http_ssl_module

make && make install
```
2, Load balancing
1. Basic configuration
upstream
nginx is mainly used for layer 7 (application layer) load balancing
Add in the http {...} block (worker_processes itself belongs in the main context of nginx.conf):
```nginx
# One worker process is used to make it easy to test and observe the number of successful connections
worker_processes 1;

upstream tomcats {
    server 192.168.1.173:8080;
    server 192.168.1.174:8080;
    server 192.168.1.175:8080;
}

# Listening server
server {
    listen 80;
    server_name www.test.com;
    location / {
        # Point to the upstream configured above
        proxy_pass http://tomcats;
    }
}
```
2. Relevant directives under upstream
weight
weight sets the proportion of requests routed to each server. The default value is 1.
```nginx
upstream tomcats {
    server 192.168.1.173:8080 weight=2;
    server 192.168.1.174:8080 weight=4;
    server 192.168.1.175:8080 weight=5;
}
```
max_conns
max_conns limits the number of concurrent connections to each server to prevent overload; it can be used for throttling.
```nginx
# One worker process is used to make it easy to test and observe the number of successful connections
worker_processes 1;

upstream tomcats {
    server 192.168.1.173:8080 max_conns=2;
    server 192.168.1.174:8080 max_conns=2;
    server 192.168.1.175:8080 max_conns=2;
}
```
slow_start
- Available only in the commercial version (nginx Plus)
- Requires two or more servers in the upstream
- weight must be configured on the server for it to take effect

slow_start sets the time over which the server's weight ramps up from 0 to its nominal value after the server recovers. The default is 0, i.e. no slow start.
```nginx
upstream tomcats {
    server 192.168.1.173:8080 weight=2 slow_start=60s;
    server 192.168.1.174:8080;
    server 192.168.1.175:8080 weight=5;
}
```
down
Marks the server node as unavailable.
```nginx
upstream tomcats {
    # Not available
    server 192.168.1.173:8080 down;
    server 192.168.1.174:8080 weight=1;
    server 192.168.1.175:8080 weight=1;
}
```
backup
Marks the server node as a backup machine. It only receives requests when the other servers are unavailable (for example, when they are all down).
```nginx
upstream tomcats {
    # Backup machine
    server 192.168.1.173:8080 backup;
    server 192.168.1.174:8080 weight=1;
    server 192.168.1.175:8080 weight=1;
}
```
max_fails, fail_timeout
The two directives are used together:
- max_fails: after this many failed requests, the server node is marked as down and excluded from the upstream servers
- fail_timeout: after the node has been marked as down by max_fails, how long to wait before trying the node again; if it still fails, the process repeats
```nginx
upstream tomcats {
    server 192.168.1.173:8080 max_fails=2 fail_timeout=15s;
    server 192.168.1.174:8080 weight=1;
    server 192.168.1.175:8080 weight=1;
}
```
3. keepalive
keepalive
Sets the number of idle keepalive (long-lived) connections to upstream servers that are kept open
proxy_http_version
Sets the HTTP version used for the proxied long-lived connection; keepalive requires proxy_http_version 1.1;
proxy_set_header
Clears the Connection header so that connections can be kept alive
```nginx
upstream tomcats {
    server 192.168.1.190:8080;
    # Number of idle keepalive connections to keep open
    keepalive 32;
}

server {
    listen 80;
    server_name www.tomcats.com;
    location / {
        proxy_pass http://tomcats;
        # Keepalive requires HTTP/1.1
        proxy_http_version 1.1;
        # Clear the Connection header so connections are reused
        proxy_set_header Connection "";
    }
}
```
4. ip_hash
ip_hash ensures that a user's requests always go to the same upstream server, provided the user's IP does not change.
Note: once ip_hash is in use, a server node you want to take out of service must not simply be removed from the configuration; mark it with down instead. Otherwise the hash distribution is recalculated (the hash is computed from the first three octets of the client IP, e.g. 192.168.1), which disrupts users.
Official documentation: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
```nginx
upstream tomcats {
    ip_hash;
    server 192.168.1.173:8080;
    server 192.168.1.174:8080 down;
    server 192.168.1.175:8080;
}
```
5. url_hash, least_conn
url_hash hashes the URL requested by the user so that requests for the same URL always reach the same server node; least_conn routes each request to the server with the fewest active connections.
```nginx
upstream tomcats {
    # url hash
    hash $request_uri;
    # Least connections
    # least_conn;
    server 192.168.1.173:8080;
    server 192.168.1.174:8080;
    server 192.168.1.175:8080;
}

server {
    listen 80;
    server_name www.tomcats.com;
    location / {
        proxy_pass http://tomcats;
    }
}
```
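If the set of upstream servers changes often, the hash directive also accepts the consistent parameter (ketama consistent hashing), which remaps only a small portion of keys when a server is added or removed. A sketch based on the example above:

```nginx
upstream tomcats {
    # Consistent (ketama) hashing on the request URI
    hash $request_uri consistent;
    server 192.168.1.173:8080;
    server 192.168.1.174:8080;
    server 192.168.1.175:8080;
}
```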
3, nginx cache control
expires
The expires directive controls the browser cache, mainly for static resources
1. Browser cache
- The browser cache is stored locally on the user's machine
- It speeds up access and improves the experience for the individual user (the browser visitor)
2. nginx cache
- The nginx cache is stored on the nginx side
- It improves the experience of all users who access nginx
- It reduces the load on, and speeds up responses from, the upstream server nodes
- User access still generates request traffic to nginx
3. nginx cache control
- Control browser cache
Configured in the specific location block:
```nginx
location /files {
    alias /home/imooc;
    # Allow the browser to cache the resource for 10s
    # expires 10s;
    # @ means the cache expires at the specified time of day
    # expires @22h30m;
    # Expiry time is 1h in the past, so the resource is not cached
    # expires -1h;
    # Expires is set to the epoch, so the cache is never used
    # expires epoch;
    # Do not add caching headers (the default)
    # expires off;
    # Maximum time, never expires
    expires max;
}
```
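In practice, expires is often applied per asset type with a regex location. A short sketch, assuming a hypothetical document root borrowed from the example above:

```nginx
# Cache common static assets in the browser for 7 days
location ~* \.(js|css|png|jpg|gif)$ {
    root /home/imooc;   # assumed directory for illustration
    expires 7d;
}
```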
- Control upstream server node cache
First configure the basic cache settings in the http block, then enable the cache in the corresponding location:
```nginx
# proxy_cache_path: directory where cached responses are stored
# keys_zone: name and size of the shared memory zone for cache keys
# max_size: maximum size of the cache on disk
# inactive: entries not accessed within this time are removed
# use_temp_path: write to a temporary directory first; enabling it affects nginx performance, so it is off
proxy_cache_path /usr/local/nginx/upstream_cache keys_zone=mycache:5m max_size=1g inactive=1m use_temp_path=off;
```
```nginx
location / {
    proxy_pass http://tomcats;
    # Enable caching; the name must match the keys_zone defined above
    proxy_cache mycache;
    # Cache responses with status 200 or 304 for 8 hours
    proxy_cache_valid 200 304 8h;
}
```
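To verify whether responses are actually served from the nginx cache, the $upstream_cache_status variable can be exposed in a response header. A minimal sketch extending the location above:

```nginx
location / {
    proxy_pass http://tomcats;
    proxy_cache mycache;
    proxy_cache_valid 200 304 8h;
    # Reports HIT, MISS, EXPIRED, etc. - useful for checking cache behaviour
    add_header X-Cache-Status $upstream_cache_status;
}
```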