Nginx installation, configuration and startup on CentOS

Installation under Linux

Note: this guide is based on a CentOS 7 system.

Nginx dependencies

Nginx depends on the following three libraries:

  1. the ssl module requires the openssl library
  2. the gzip module requires the zlib library
  3. the rewrite module requires the pcre library

The installation order is: openssl, zlib, pcre, and finally the Nginx package itself.

Installation tutorial (source code installation)

step 1: download the required package

 
openssl-fips-2.0.2.tar.gz
zlib-1.2.7.tar.gz
pcre-8.21.tar.gz
nginx-1.12.2.tar.gz

step 2: install OpenSSL

 
[root@localhost wcw]# tar -zxvf openssl-fips-2.0.2.tar.gz 
[root@localhost wcw]# cd openssl-fips-2.0.2
[root@localhost openssl-fips-2.0.2]# ./config 
[root@localhost openssl-fips-2.0.2]# make && make install

step 3: install zlib

 
[root@localhost wcw]# tar -zxvf zlib-1.2.7.tar.gz
[root@localhost wcw]# cd zlib-1.2.7
[root@localhost zlib-1.2.7]# ./configure 
[root@localhost zlib-1.2.7]# make && make install

step 4: install pcre

 
[root@localhost wcw]# tar -zxvf pcre-8.21.tar.gz
[root@localhost wcw]# cd pcre-8.21
[root@localhost pcre-8.21]# ./configure 
[root@localhost pcre-8.21]# make && make install

step 5: install Nginx

 
[root@localhost wcw]# tar -zxvf nginx-1.12.2.tar.gz 
[root@localhost wcw]# cd nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure --prefix=/usr/install/nginx --with-pcre=../pcre-8.21 --with-zlib=../zlib-1.2.7 --with-openssl=../openssl-fips-2.0.2
[root@localhost nginx-1.12.2]# make && make install

Please note that the value of each "--with-…" option is the extracted source directory of the dependency, not its installation directory!

Nginx Linux basic operating instructions

     
Start service: nginx
Exit service gracefully: nginx -s quit
Force service shutdown: nginx -s stop
Reload service: nginx -s reload  (reloads the configuration file; similar to a restart, but the service does not stop)
Validate configuration file: nginx -t
Use a specific configuration file: nginx -c "configuration file path"
Show help: nginx -h
   

At this point, you can add Nginx to the PATH environment variable so the service can be operated from anywhere.
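One common way to do this, assuming the install prefix /usr/install/nginx used in step 5, is to append Nginx's sbin directory to PATH; a minimal sketch:

```shell
# Append Nginx's sbin directory to the PATH for the current shell.
# The prefix /usr/install/nginx matches the --prefix used in step 5;
# adjust it if you installed elsewhere.
export PATH="$PATH:/usr/install/nginx/sbin"

# Verify the directory is now on the PATH.
echo "$PATH" | tr ':' '\n' | grep -x '/usr/install/nginx/sbin'
```

To make the change permanent, add the export line to /etc/profile (system-wide) or ~/.bash_profile (per user) and re-login or source the file.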

Check whether the installation is successful:

 
[root@localhost wcw]# nginx -t

If the output reports "syntax is ok" and "test is successful", the installation succeeded.

Alternatively, enter "127.0.0.1" in the browser address bar and press enter; if the Nginx welcome page is displayed, the installation also succeeded.

Nginx configuration file description

The three core functions used most in projects are static file serving, reverse proxying and load balancing.

The use of these three functions is closely tied to how Nginx is configured. The server's configuration information is concentrated in the configuration file "nginx.conf", and the configurable options are roughly divided into the following parts:

     
main                                # Global configuration

events {                            # Working mode configuration

}

http {                              # http settings
    ....

    server {                        # Server host configuration (virtual host, reverse proxy, etc.)
        ....
        location {                  # Routing configuration (virtual directory, etc.)
            ....
        }

        location path {
            ....
        }

        location otherpath {
            ....
        }
    }

    server {
        ....

        location {
            ....
        }
    }

    upstream name {                  # Load balancing configuration
        ....
    }
}
   

main module

  • user specifies the user and group that the nginx worker processes run as. The default is the nobody account
  • worker_processes specifies the number of worker processes nginx starts. Monitor the memory consumption of each process during operation (generally from a few MB to tens of MB) and adjust accordingly. Usually this is an integer multiple of the number of CPU cores
  • error_log defines the location and output level of the error log file [debug / info / notice / warn / error / crit]
  • pid specifies the location of the file storing the master process id
  • worker_rlimit_nofile specifies the maximum number of file descriptors a worker process can open
  • ...

event module

  • worker_connections specifies the maximum number of connections each worker can handle at the same time. Note that the overall maximum is determined jointly with worker_processes.
  • multi_accept tells nginx to accept as many connections as possible after receiving a new connection notification
  • use specifies the event polling method: on Linux 2.6+, use epoll; on BSD systems such as macOS, use kqueue
  • ...
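Putting the directives above together, a minimal sketch of the global and events sections might look like this (the values are illustrative, not recommendations):

```nginx
user   nobody nobody;          # user and group the workers run as
worker_processes  4;           # e.g. a multiple of the CPU core count
error_log  logs/error.log  warn;
pid        logs/nginx.pid;
worker_rlimit_nofile  65535;   # max open file descriptors per worker

events {
    worker_connections  1024;  # per worker; total = worker_processes * worker_connections
    multi_accept  on;
    use  epoll;                # Linux 2.6+; use kqueue on BSD/macOS
}
```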

http module

As a web server, the http module is the core module of nginx and has a large number of configuration options. Many real business scenarios require settings here, configured appropriately according to the hardware.

1) Basic configuration

     
sendfile on: enables sendfile, handing file writeback to the kernel's data buffer instead of the application, which improves performance
tcp_nopush on: makes nginx send all header files in one packet instead of one by one
tcp_nodelay on: makes nginx send data piece by piece instead of buffering it. Configure this if the transmission has real-time requirements, so a small piece of data gets an immediate return value, but don't abuse it

keepalive_timeout 10: assigns a connection timeout to the client, after which the server closes the connection. Generally set short, so nginx can keep working continuously
client_header_timeout 10: sets the timeout for the request header
client_body_timeout 10: sets the timeout for the request body
send_timeout 10: specifies the client response timeout. If the interval between two client operations exceeds this time, the server closes the connection

limit_conn_zone $binary_remote_addr zone=addr:5m: sets the parameters of the shared memory zone that stores the state for each key
limit_conn addr 100: sets the maximum number of connections for a given key

server_tokens off: does not make nginx run faster, but hides the nginx version on error pages, which improves site security
include /etc/nginx/mime.types: includes another file in the current file
default_type application/octet-stream: files of unknown type are treated as binary by default
types_hash_max_size 2048: affects the hash collision rate. The larger the value, the more memory is consumed, but the lower the collision rate of hash keys and the faster the lookup; a smaller value uses less memory, with a higher collision rate and slower lookup
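Assembled into an http block, the basic directives described above might be sketched like this (the values are illustrative):

```nginx
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile     on;
    tcp_nopush   on;
    tcp_nodelay  on;

    keepalive_timeout      10;
    client_header_timeout  10;
    client_body_timeout    10;
    send_timeout           10;

    # Track connections per client address in a 5 MB shared zone,
    # allowing at most 100 concurrent connections per address.
    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn      addr 100;

    server_tokens        off;
    types_hash_max_size  2048;
}
```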
   

2) Log configuration

 
access_log logs/access.log: sets the log file that stores access records
error_log logs/error.log: sets the log file that stores errors

3) SSL certificate configuration

 
ssl_protocols: enables specific encryption protocols. Since nginx 1.1.13 and 1.0.12, the default is "ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2". TLSv1.1 and TLSv1.2 require OpenSSL >= 1.0.1. SSLv3 is still used in many places but has many exploited vulnerabilities, so consider disabling it.
ssl_prefer_server_ciphers on: when negotiating the encryption algorithm, prefer the server's cipher suites over the client browser's
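A minimal HTTPS server sketch using these directives; the domain, certificate and key paths are placeholders you must replace with your own:

```nginx
server {
    listen       443 ssl;
    server_name  www.example.com;                      # placeholder domain

    ssl_certificate      /etc/nginx/ssl/example.crt;   # placeholder path
    ssl_certificate_key  /etc/nginx/ssl/example.key;   # placeholder path

    # Drop SSLv3 (vulnerable); keep only the TLS protocols.
    ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers  on;
}
```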

4) Compression configuration

     
gzip on: tells nginx to send data in gzip-compressed form, reducing the amount of data sent.
gzip_disable: disables gzip for the specified clients. We set it to match IE6 and lower to keep the solution widely compatible.
gzip_static: tells nginx to look for pre-gzipped resources before compressing. This requires pre-compressing your files, which allows the highest compression ratio, so nginx no longer has to compress them on the fly.
gzip_proxied: allows or forbids compressing responses to proxied requests. We set it to any, meaning all requests are compressed.
gzip_min_length: sets the minimum number of bytes for compression. Requests smaller than 1000 bytes are better left uncompressed, since compressing such small data slows down every process handling the request.
gzip_comp_level: sets the compression level, any number from 1 to 9; 9 is the slowest but has the highest compression ratio. We set it to 4, a compromise.
gzip_types: sets the data formats (MIME types) to be compressed; you can add more formats as needed.
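The matching configuration block could be sketched as follows (the exact type list is up to you; gzip_static additionally requires nginx built with the gzip_static module):

```nginx
gzip              on;
gzip_disable      "msie6";    # skip IE6 and older
gzip_static       on;         # serve pre-compressed .gz files if present
gzip_proxied      any;        # compress all proxied responses
gzip_min_length   1000;       # don't compress tiny responses
gzip_comp_level   4;          # 1 (fastest) .. 9 (smallest)
gzip_types        text/plain text/css text/xml application/json application/javascript;
```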
   

5) File cache configuration

   
open_file_cache: enables the cache and also specifies the maximum number of cached entries and the cache time. We can set a relatively long maximum time so entries are cleared after being inactive for more than 20 seconds.
open_file_cache_valid: specifies the interval at which open_file_cache entries are checked for validity.
open_file_cache_min_uses: defines the minimum number of uses of a file within the inactivity period of the open_file_cache directive for it to stay cached.
open_file_cache_errors: specifies whether to cache errors (such as file-not-found) when searching for a file.
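For example (illustrative values, matching the 20-second inactivity mentioned above):

```nginx
open_file_cache           max=1000 inactive=20s;  # cache up to 1000 descriptors
open_file_cache_valid     30s;   # re-check cached entries every 30 seconds
open_file_cache_min_uses  2;     # keep only files used at least twice
open_file_cache_errors    on;    # also cache lookup errors
```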
 

server module

The server module is a sub-module of the http module, used to define the configuration of a virtual host, that is, a virtual server.

     
server {
    listen         80;
    server_name    localhost  192.168.1.100;
    charset        utf-8;
    access_log     logs/access.log;
    error_log      logs/error.log;
    ......
}
   
  • server: configuration of a virtual host. Multiple servers can be configured in one http
  • server_name: specifies the ip address or domain name; separate multiple values with spaces
  • charset: sets the default encoding format of the web pages under the configured web root
  • access_log: used to specify the storage path of access records in the virtual host server
  • error_log: used to specify the storage path of the access error log in the virtual host server

location module

The location module is the most common configuration in Nginx configuration, which is mainly used to configure routing access information.

The routing access information configuration is related to various functions such as reverse proxy, load balancing, etc., so the location module is also a very important configuration module.

1) Basic configuration

 
location / {
    root    /nginx/www;
    index    index.php index.html index.htm;
}
  • location /: matches access to the root directory
  • root: specifies the web root directory of the virtual host when the root directory is accessed
  • index: the default resource files to try when no specific resource is requested

2) Reverse proxy configuration

With the reverse proxy access mode, the proxy_set_header configuration makes client access transparent to the backend server.

   
location / {
    proxy_pass http://localhost:8888;
    proxy_set_header X-real-ip $remote_addr;
    proxy_set_header Host $http_host;
}
 

3) uwsgi configuration

 
location / {
    include uwsgi_params;
    uwsgi_pass localhost:8888;
}

Load balancing module (upstream)

The upstream module is responsible for load balancing configuration, distributing requests to back-end servers, by default using round-robin scheduling. A simple configuration looks as follows.

     
upstream name {
    ip_hash;
    server 192.168.1.100:8000 weight=9;
    server 192.168.1.100:8001 down;
    server 192.168.1.100:8002 max_fails=3;
    server 192.168.1.100:8003 fail_timeout=20s;
    server 192.168.1.100:8004 max_fails=3 fail_timeout=20s;
}
   
  • ip_hash: specifies the request scheduling algorithm. The default is weighted round-robin scheduling; ip_hash can be specified instead
  • server host:port: the list of back-end servers to distribute requests to
  • down: marks the host as out of service
  • max_fails: the maximum number of failures; if exceeded, the server is suspended
  • fail_timeout: the time a server is suspended after max_fails failures before requests are retried

Main configuration of Nginx

Static Http server configuration

First of all, Nginx is an HTTP server that can serve static files on the server (such as HTML and images) to clients over the HTTP protocol.
Configuration:

     
server {
    listen 80;   # port
    server_name localhost  192.168.1.100;   # domain name   
    location / {             # This is the root directory of the project
        root /usr/share/nginx/www;   # Virtual directory
    }
}
   

Reverse proxy server configuration

What is a reverse proxy?
A client can access a website's application server directly over HTTP. If the website administrator adds an Nginx in between, the client requests Nginx, Nginx requests the application server, and the result is returned to the client. Nginx is then the reverse proxy server.

Reverse proxy configuration:

     
server {
    listen 80;
    location / {
        proxy_pass http://192.168.0.112:8080;  # Application Server HTTP address
    }
}
   

Since the server can be accessed directly over HTTP, why add a reverse proxy in between? Is it unnecessary? What is a reverse proxy for? Read on: the load balancing and virtual host setups below are both based on reverse proxying, and these are far from its only uses.

Load balancing configuration

When a website receives very heavy traffic, one server is no longer enough and the site gets slower and slower. So the same application is deployed on multiple servers, and the large volume of user requests is distributed across the machines. A further advantage: if one server goes down, the site keeps working as long as the other servers run normally. Nginx can achieve load balancing through reverse proxying.

Load balancing configuration:

   
upstream myapp {
    ip_hash;                              # same client always goes to the same server
    server 192.168.0.111:8080 weight=9;   # Application server 1
    server 192.168.0.112:8080 weight=1;   # Application server 2
}
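To actually route traffic through this pool, the upstream name is referenced from a server block via proxy_pass; a minimal sketch:

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://myapp;   # "myapp" is the upstream defined above
    }
}
```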
 

Virtual host configuration

Some websites receive heavy traffic and need load balancing. But not every website is that prominent; some, to save costs given their small traffic, deploy multiple websites on the same server.
For example, www.aaa.com and www.bbb.com are deployed on the same server, with both domain names resolving to the same IP address; yet users open two completely different websites through the two domain names, which do not affect each other, just like accessing two separate servers. Hence they are called two virtual hosts.

Virtual host configuration:

     
server {
    listen 80 default_server;
    server_name _;
    return 444;   # Filter the requests of other domain names and return 444 status code
}
server {
    listen 80;
    server_name www.aaa.com;   # www.aaa.com domain name
    location / {
        proxy_pass http://localhost:8080;  # corresponding port number 8080
    }
}
server {
    listen 80;
    server_name www.bbb.com;   # www.bbb.com domain name
    location / {
        proxy_pass http://localhost:8081;  # corresponding port number 8081
    }
}
   

One application listens on port 8080 and another on port 8081. Clients access them through different domain names, and according to server_name Nginx reverse-proxies to the corresponding application server.

The principle of virtual hosts is matching the Host field of the HTTP request header against server_name. Interested readers can study the HTTP protocol further.

In addition, the server_name configuration can also filter out requests from domain names that someone has maliciously pointed at your server.

Nginx startup (method 1, applicable to CentOS7, systemctl management service)

CentOS7 system service script directory

User: services that should run only after a user logs in are stored in the user directory.

 
/usr/lib/systemd/user

System: services that should run at boot without a user login are stored in the system directory.

 
/usr/lib/systemd/system

Write service script

The service file name must end with .service:

 
vim /usr/lib/systemd/system/nginx.service

Script content (fixed format):

     
[Unit]
Description=nginx
After=network.target
   
[Service]
Type=forking
PIDFile=/usr/install/nginx/logs/nginx.pid
ExecStart=/usr/install/nginx/sbin/nginx
ExecReload=/usr/install/nginx/sbin/nginx -s reload
ExecStop=/usr/install/nginx/sbin/nginx -s stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
   

The paths above must be absolute! The values of ExecStart, ExecReload and ExecStop can also be absolute paths of custom sh scripts under "/etc/init.d". The following shows uWSGI startup implemented this way:

 
Create the uWSGI start script "uwsgi-start.sh" under the /etc/init.d directory:

#!/bin/sh
/pyvenv/bin/uwsgi --ini /pyvenv/src/eduonline/uwsgi.ini;
 
Create the uWSGI restart script "uwsgi-restart.sh" under the /etc/init.d directory:

#!/bin/sh
/pyvenv/bin/uwsgi --reload /pyvenv/src/eduonline/uwsgi.pid;
 
Create the uWSGI stop script "uwsgi-stop.sh" under the /etc/init.d directory:

#!/bin/sh
/pyvenv/bin/uwsgi --stop /pyvenv/src/eduonline/uwsgi.pid;

Note: use absolute paths inside the sh scripts as well! After saving, grant read and execute permissions. Then write the service unit file.

Enable start on boot (with systemctl, CentOS's service management tool)

     
systemctl enable nginx.service      # ".service" can be omitted

# Other common commands:
systemctl start nginx.service       # start
systemctl restart nginx.service     # restart; the service is briefly interrupted
systemctl reload nginx.service      # reload the configuration file, similar to a restart but without stopping the service
systemctl stop nginx.service        # stop
systemctl disable nginx.service     # disable start on boot
   

If you are prompted with "Failed to execute operation: Access denied", run "systemctl daemon-reexec" to fix it.

Nginx startup (method 2, applicable to CentOS7 and below)

First, create the nginx script file in the "/etc/init.d/" directory of the Linux system, and use the following command:

 
touch nginx       # create
vim nginx         # edit

Add the following commands to the script:
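The original script content was not preserved here. As a reference, a minimal chkconfig-style sketch is generated below; it is written to a temporary demo path for illustration (on a real system the content goes into /etc/init.d/nginx), and the nginx paths assume the --prefix=/usr/install/nginx used in step 5:

```shell
# Write a minimal chkconfig-compatible init script to a demo path
# (use /etc/init.d/nginx on a real system). The nginx binary path
# assumes the --prefix=/usr/install/nginx used in step 5.
cat > /tmp/nginx-init-demo <<'EOF'
#!/bin/sh
# chkconfig: 2345 85 15
# description: Nginx web server

NGINX=/usr/install/nginx/sbin/nginx

case "$1" in
  start)   $NGINX ;;
  stop)    $NGINX -s stop ;;
  reload)  $NGINX -s reload ;;
  restart) $NGINX -s stop; sleep 1; $NGINX ;;
  *)       echo "Usage: $0 {start|stop|reload|restart}"; exit 2 ;;
esac
EOF

# Syntax-check the generated script without executing it.
sh -n /tmp/nginx-init-demo && echo "init script syntax OK"
```

The "# chkconfig: 2345 85 15" header line is what lets chkconfig register the script (runlevels 2345, start priority 85, stop priority 15).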

 

Add executable permissions to all users after saving the script file:

 
chmod a+x /etc/init.d/nginx

First add nginx service to chkconfig management list:

 
chkconfig --add /etc/init.d/nginx

Set the service to start at boot:

 
chkconfig nginx on

Posted by Hitwalker on Mon, 01 Aug 2022 02:29:49 +0930