Nginx knowledge for front-end developers

The Role of nginx in Applications

  • Solve cross-domain problems
  • Request filtering
  • Configure gzip
  • Load balancing
  • Static resource server

Nginx is a high-performance HTTP and reverse proxy server, as well as a general TCP/UDP proxy server, originally written by the Russian developer Igor Sysoev.

Nginx is now close to a must-have technology for most large websites. In most cases we do not need to configure it ourselves, but it is well worth understanding the role it plays in an application and how it solves the problems it is used for.

Next, I will explain the role nginx plays in applications, based on how nginx is actually used in companies.

To make things easier to understand, let's first look at some basic knowledge. Nginx is a high-performance reverse proxy server, so what is a reverse proxy?

Forward Proxy and Reverse Proxy

A proxy is a layer of servers that sits between the client and the server. The proxy receives the client's requests and forwards them to the server, then forwards the server's responses back to the client.

Both forward proxies and reverse proxies implement the function described above.

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/nginx2.png)

Forward proxy

A forward proxy is a server located between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client.

A forward proxy serves us, that is, the client. Through a forward proxy, the client can access server resources that it otherwise could not reach.

A forward proxy is transparent to us but not to the server: the server does not know whether the request it receives comes from a proxy or from the real client.

Reverse proxy

A reverse proxy is a proxy server that accepts connection requests from the internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the clients that made the requests. To the outside world, the proxy server appears as a reverse proxy server.

A reverse proxy serves the server. It helps the server receive requests from clients and handles request forwarding, load balancing, and so on.

A reverse proxy is transparent to the server but not to us: we do not know that we are accessing a proxy server, while the server knows that a reverse proxy is serving it.

Basic configuration

Configuration structure

The following is the basic structure of an nginx configuration file:

events {

}

http {
    server {
        location path {
            ...
        }

        location path {
            ...
        }
    }

    server {
        ...
    }
}
  • main: nginx's global configuration, which takes effect globally.
  • events: configuration that affects the nginx server's network connections with users.
  • http: can nest multiple server blocks, and configures proxying, caching, log definitions, most other features, and third-party modules.
  • server: configures the parameters of a virtual host; one http block can contain multiple server blocks.
  • location: configures request routing and the handling of various pages.
  • upstream: configures the concrete addresses of the back-end servers, an indispensable part of load-balancing configuration (see the sketch below for where it sits).
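
The structure above does not show where upstream fits. As a minimal sketch (the upstream name "backend" and the back-end address are placeholders), an upstream block sits inside http, alongside the server blocks that refer to it:

events {
}

http {
    # upstream lives inside http, next to the server blocks that reference it
    upstream backend {
        server 10.1.22.33:12345;   # placeholder back-end address
    }

    server {
        listen 80;
        location /api {
            proxy_pass http://backend;   # refer to the upstream by its name
        }
    }
}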

Built-in variables

The following are some of nginx's built-in global variables that are commonly used in configuration; you can use them anywhere in the configuration.

| Variable name | Function |
| --- | --- |
| $host | The Host in the request; if the request contains no Host header, it equals the configured server name |
| $request_method | The client request method, such as GET or POST |
| $remote_addr | The client's IP address |
| $remote_port | The client's port |
| $args | The arguments in the request |
| $content_length | The Content-Length field in the request header |
| $http_user_agent | The client's user-agent information |
| $http_cookie | The client's cookie information |
| $server_protocol | The protocol of the request, such as HTTP/1.0 or HTTP/1.1 |
| $server_addr | The server's address |
| $server_name | The server's name |
| $server_port | The server's port |
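
As a small illustration of how these variables can be used (the format name "simple" and the log path are made-up examples), they can be combined into a custom access-log format:

http {
    # build a custom access-log format from the built-in variables listed above
    log_format simple '$remote_addr "$request_method $host" $server_protocol "$http_user_agent"';

    server {
        listen 80;
        # write access logs with the custom format; the path is illustrative
        access_log /var/log/nginx/access.log simple;
    }
}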

Solve cross-domain problems

Let's go back to basics and see what cross-domain actually means.

Cross-domain definition

The same-origin policy restricts how documents or scripts loaded from one origin can interact with resources from another origin. It is an important security mechanism for isolating potentially malicious files. Read operations across different origins are generally not allowed.

Definition of same origin

If two pages have the same protocol, port (if specified), and domain name, they have the same origin. For example, https://fe.server.com/index.html and https://fe.server.com/api/user share an origin, while http://fe.server.com (different protocol) and https://dev.server.com (different domain) do not.


How Nginx Solves Cross-Domain Problems

For example:

  • The domain name of the front-end server is fe.server.com
  • The domain name of the back-end service is dev.server.com

Now, when fe.server.com initiates a request to dev.server.com, a cross-domain problem is bound to occur.

Now we just need to start an nginx server, set its server_name to fe.server.com, then set up the corresponding location to intercept the front-end's cross-domain requests, and finally proxy the requests to dev.server.com, as in the following configuration:

server {
    listen 80;
    server_name fe.server.com;
    location / {
        proxy_pass http://dev.server.com;
    }
}

This perfectly sidesteps the browser's same-origin policy: fe.server.com accessing nginx's fe.server.com is a same-origin access, and the requests nginx forwards to the back-end server do not trigger the browser's same-origin policy at all.
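
Depending on the backend, the forwarded headers may also need adjusting; the following is a minimal sketch building on the configuration above (whether dev.server.com actually needs these headers is an assumption):

location / {
    proxy_pass http://dev.server.com;
    # let the backend see its own domain rather than fe.server.com
    proxy_set_header Host dev.server.com;
    # pass the real client IP on to the backend
    proxy_set_header X-Real-IP $remote_addr;
}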

Request filtering

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/404.jpg)

Filter according to status code

error_page 500 501 502 503 504 506 /50x.html;
location = /50x.html {
    # adapt the following path to the directory where the html file is stored
    root /root/static/html;
}

Filtering by URL name: URLs are matched exactly, and all URLs that do not match are redirected to the home page.

location / {
    rewrite ^.*$ /index.html redirect;
}

Filter by request type.

if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 403;
}

Configure gzip

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/gzip.jpg)

GZIP is one of the three standard HTTP compression formats. At present, the vast majority of websites use GZIP to transmit resource files such as HTML, CSS, and JavaScript.

For text files, the effect of GZip is very noticeable: the traffic required for transmission drops to roughly 1/4 to 1/3 of the original.

Not every browser supports gzip, so how do we know whether the client supports it? The Accept-Encoding field in the request header identifies which compression methods the client supports.


Enabling gzip requires support from both the client and the server: if the client supports gzip, then gzip works as long as the server can return gzip-compressed files, and we can use nginx to make the server support gzip. The content-encoding: gzip in the response below indicates that the server has enabled gzip compression.

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/gzip2.png)

gzip                    on;
gzip_http_version       1.1;
gzip_comp_level         5;
gzip_min_length         1000;
gzip_types text/csv text/xml text/css text/plain text/javascript application/javascript application/x-javascript application/json application/xml;

gzip

  • Enables or disables the gzip module
  • Default value: off
  • Can be set to on / off

gzip_http_version

  • The minimum HTTP version required to enable GZip
  • Default value: HTTP/1.1

Why isn't the default version 1.0 here?

HTTP runs over TCP connections and therefore inherits TCP characteristics such as the three-way handshake and slow start.

When persistent connections are enabled, the server keeps the TCP connection open after responding. Subsequent requests and responses between the same client/server pair can be sent over this connection.

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/keepalive.png)

To get the best possible HTTP performance, using persistent connections is particularly important.

HTTP/1.1 supports TCP persistent connections by default, and HTTP/1.0 can also enable them by explicitly specifying Connection: keep-alive. For HTTP messages on a persistent TCP connection, the client needs a mechanism to accurately determine where a message ends; in HTTP/1.0 the only such mechanism is Content-Length, whereas HTTP/1.1 adds Transfer-Encoding: chunked, a chunked transfer mechanism that solves this problem perfectly.

nginx also has a directive for chunked transfer, chunked_transfer_encoding, which is turned on by default.

When GZip is enabled, Nginx does not wait for the file to be fully GZip-compressed before returning the response; instead it compresses while responding, which significantly improves TTFB (Time To First Byte, an important web performance metric). The only problem is that when Nginx starts returning the response, it cannot know how big the transferred file will ultimately be, that is, it cannot provide the Content-Length response header.

So under HTTP/1.0, if Nginx enables GZip, Content-Length is unavailable, which means that in HTTP/1.0 you can have either persistent connections or GZip, but not both. That is why gzip_http_version defaults to 1.1.
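
Conversely, if nginx sits behind another proxy or CDN that talks HTTP/1.0 to it and compression is still wanted, the version floor can be lowered explicitly. A minimal sketch (whether your front proxy really uses HTTP/1.0 is an assumption):

gzip              on;
# allow compression for HTTP/1.0 requests as well, accepting the trade-off described above
gzip_http_version 1.0;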

gzip_comp_level

  • The compression level: the higher the level, the higher the compression ratio, and of course the longer compression takes (faster transmission, but more CPU consumption)
  • Default value: 1
  • Valid levels: 1-9

gzip_min_length

  • Sets the minimum number of bytes a response must have to be compressed; responses whose Content-Length is smaller than this value will not be compressed
  • Default value: 0
  • If the value is too small, the compressed output may end up larger than the original file; it is recommended to set it above 1000

gzip_types

  • The file types (MIME types) to be compressed with gzip
  • Default value: text/html (js/css files are not compressed by default)
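
A related directive not shown in the config above is gzip_vary, which makes nginx add a Vary: Accept-Encoding response header so that intermediary caches keep compressed and uncompressed copies apart:

# emit "Vary: Accept-Encoding" so caches do not serve a gzipped body
# to clients that cannot decode it
gzip_vary on;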

Load balancing

What is load balancing

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/nginx3.jpg)

As shown in the figure above, there are many service windows at the front and many users below who need service. We need a tool or strategy to help us distribute these users across the windows, so that resources are fully used and queuing time is kept short.

Think of the service windows as our back-end servers, with countless clients initiating requests. Load balancing helps us distribute the many client requests across the servers in a reasonable way, so that server resources are fully used and request time is reduced.

How nginx implements load balancing

The upstream directive specifies the list of back-end server addresses:

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Then, in the server block, intercept the requests that need forwarding and proxy them to the server list configured in upstream:

server {
    server_name fe.server.com;
    listen 80;
    location /api {
        proxy_pass http://balanceServer;
    }
}

The above configuration only specifies the list of servers nginx should forward to; it does not specify the allocation policy.

Nginx’s Load Balancing Strategy

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/loadBalancing.png)

Round-robin policy

This is the default policy: client requests are distributed to the servers in turn. It works fine in most cases, but if one of the servers comes under too much pressure and starts to lag, every user assigned to that server will be affected.

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
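
The default round robin also supports per-server weights, so that more capable machines take a proportionally larger share of the traffic. A minimal sketch (the weights themselves are only illustrative):

upstream balanceServer {
    server 10.1.22.33:12345 weight=3;   # gets roughly three requests for every one sent to the others
    server 10.1.22.34:12345;            # default weight is 1
    server 10.1.22.35:12345;
}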

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/nginx5.png)

Least-connections policy

By giving priority to the servers under less pressure, this policy balances the length of each queue and avoids piling more requests onto servers that are already stressed.

upstream balanceServer {
    least_conn;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

![image](https://lsqimg-1257917459.cos-website.ap-beijing.myqcloud.com/blog/nginx4.png)

Fastest response time strategy

Requests are preferentially allocated to the server with the shortest response time. Note that the fair directive shown below comes from a third-party module rather than stock nginx; NGINX Plus provides a comparable built-in method (least_time).

upstream balanceServer {
    fair;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Client IP binding

Requests from the same IP are always assigned to the same server, which effectively solves the session-sharing problem of dynamic websites.

upstream balanceServer {
    ip_hash;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Static resource server

location ~* \.(png|gif|jpg|jpeg)$ {
root    /root/static/;
autoindex on;
access_log  off;
expires     10h;  # Set Expiration Time to 10 Hours
}

This matches requests ending in png|gif|jpg|jpeg and serves them from a local path; the path specified in root is a local path on the nginx host. For example, with this configuration a request for /logo.png would be served from /root/static/logo.png. Some cache settings can also be applied at the same time.

Summary

Nginx is very powerful, and there is still much more to explore. Some of the configurations above are simplified versions of real configurations used at my company.

If there are any mistakes in the article, please point them out in the comments. If this article helps you, please comment and follow.

If you want to read more quality articles, please follow my GitHub blog. Your stars ✨, likes, and follows are the driving force behind my continued writing!

I also recommend Fundebug, a very useful bug-monitoring tool~

You are also welcome to follow my WeChat official account "Code Secret Garden", which pushes high-quality articles every day so that we can communicate and grow together.
