Golang Gin Practice Serialization 17 Deploying Go Applications with Nginx

  golang, nginx, php


Original address: Golang Gin Practice Serialization 17 Deploying Go Applications with Nginx

Preface

If you have read the previous “16 installments and 2 extras”, I believe your skills have improved.

Now, let’s talk about the simple deployment of back-end services today.

What to do

In this chapter, we will briefly introduce Nginx and use it to deploy go-gin-example, implementing a reverse proxy and simple load balancing.

Nginx

What is it?

Nginx is a web server that can also act as a reverse proxy, load balancer, mail proxy, and TCP/UDP/HTTP server. It has many attractive features, such as:

  • Handles more than 10,000 concurrent connections with a low memory footprint (roughly 2.5 MB per 10,000 inactive HTTP keep-alive connections)
  • Static file serving
  • Forward and reverse proxying
  • Load balancing
  • TLS/SSL with SNI and OCSP support via OpenSSL
  • FastCGI, SCGI, and uWSGI support
  • WebSockets and HTTP/1.1 support
  • Nginx + Lua scripting

Installation

Installation depends on your platform, so please consult Google or Baidu and install Nginx before continuing.

Simple explanation

Common commands

  • nginx: start nginx
  • nginx -s stop: stop the nginx service immediately
  • nginx -s reload: reload the configuration file
  • nginx -s quit: stop the nginx service gracefully
  • nginx -t: test whether the configuration file is correct
  • nginx -v: display the nginx version
  • nginx -V: display the nginx version, compiler, and configure arguments

Configuration involved

1. proxy_pass: configures the reverse proxy target. Note that if the proxy_pass URL ends with /, the part of the request path matched by the location is replaced by that URI; otherwise (when no variables are used) the original request path is passed to the upstream unchanged.
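The trailing-slash behavior can be illustrated with two mutually exclusive location blocks. This is only a sketch, assuming a backend on 127.0.0.1:8000; use one variant or the other for a given prefix, not both:

```nginx
# Variant A: proxy_pass with a URI ("/"): the matched prefix is replaced.
# A request for /api/users is forwarded as /users.
location /api/ {
    proxy_pass http://127.0.0.1:8000/;
}

# Variant B: proxy_pass without a URI: the original path is kept.
# A request for /api/users is forwarded as /api/users.
location /api/ {
    proxy_pass http://127.0.0.1:8000;
}
```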

2. upstream: configures load balancing. upstream distributes requests by round-robin polling by default, and also supports four other modes:

(1) weight: weighted polling; the weight is proportional to the probability that a backend is chosen

(2) ip_hash: requests are distributed by the hash of the client IP, so the same visitor always reaches the same backend

(3) fair: requests are allocated by backend response time, with shorter response times getting higher priority (provided by a third-party module)

(4) url_hash: requests are distributed by the hash of the requested URL (provided by a third-party module)
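As a sketch, the weight and ip_hash modes could be configured like this (the addresses are the two demo backends used in this chapter; the upstream names are illustrative):

```nginx
# weight: 8001 receives roughly twice as many requests as 8002.
upstream api_weighted {
    server 127.0.0.1:8001 weight=2;
    server 127.0.0.1:8002 weight=1;
}

# ip_hash: requests from the same client IP stick to the same backend.
upstream api_sticky {
    ip_hash;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
```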

Deployment

We need to configure nginx.conf here. If you are not sure which configuration file is in use, run nginx -t to find out:

$ nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

Clearly, my configuration file is in the /usr/local/etc/nginx/ directory, and the test passed.

Reverse proxy

A reverse proxy is a proxy server that accepts connection requests from the network, forwards them to servers on the internal network, and returns the results obtained from those servers to the clients that requested the connection. To the outside, the proxy server itself appears to be the server. (from the encyclopedia)
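The same idea can be sketched in Go itself with the standard library's httputil.ReverseProxy. This is only an illustration of the concept, not part of go-gin-example; the in-process test servers stand in for the real backend and for nginx:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// throughProxy starts a stand-in backend plus a reverse proxy in front
// of it, sends one request to the proxy, and returns the backend's reply.
func throughProxy() string {
	// The backend plays the role of go-gin-example.
	backend := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, "hello from backend")
		}))
	defer backend.Close()

	// The proxy plays the role nginx plays in this chapter: it accepts
	// the client connection and forwards the request to the backend.
	target, _ := url.Parse(backend.URL)
	front := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer front.Close()

	resp, _ := http.Get(front.URL)
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(throughProxy()) // hello from backend
}
```

The client only ever talks to the front server, yet receives the backend's response, which is exactly what nginx does for go-gin-example below.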


Configure hosts

Since the demonstration runs on the local machine, first set up the host mapping. Open /etc/hosts and add:

127.0.0.1       api.blog.com

Configure nginx.conf

Open nginx's configuration file nginx.conf (mine is /usr/local/etc/nginx/nginx.conf). We make the following change:

Add a server block: set server_name to api.blog.com, listen on port 8081, and forward all paths to http://127.0.0.1:8000/.

worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       8081;
        server_name  api.blog.com;

        location / {
            proxy_pass http://127.0.0.1:8000/;
        }
    }
}

Verification

Start go-gin-example

Go back to the go-gin-example directory, execute make, and then run ./go-gin-example:

$ make
github.com/EDDYCJY/go-gin-example
$ ls
LICENSE        README.md      conf           go-gin-example middleware     pkg            runtime        vendor
Makefile       README_ZH.md   docs           main.go        models         routers        service
$ ./go-gin-example 
...
[GIN-debug] DELETE /api/v1/articles/:id      --> github.com/EDDYCJY/go-gin-example/routers/api/v1.DeleteArticle (4 handlers)
[GIN-debug] POST   /api/v1/articles/poster/generate --> github.com/EDDYCJY/go-gin-example/routers/api/v1.GenerateArticlePoster (4 handlers)
Actual pid is 14672
Restart nginx
$ nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
$ nginx -s reload
Access the API


With that, a simple reverse proxy is in place. Simple, isn't it?

load balancing

Load balancing (often abbreviated LB) means distributing requests across multiple operating units for execution (from the encyclopedia).

You may often hear the operations team say that the load on XXX is suddenly high. So what does that refer to?

Behind the service there are usually several servers. The system dynamically distributes requests among them according to the configured strategy (Nginx, for example, offers the four modes above) to keep the load on each node as balanced as possible, improving the system's overall throughput and responsiveness.
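Nginx's default round-robin polling can be mimicked in a few lines of Go. This is a toy model only, just to make the request distribution concrete; the addresses match the two demo backends used in this chapter:

```go
package main

import "fmt"

// backends mirrors the two demo servers registered in the upstream block.
var backends = []string{"127.0.0.1:8001", "127.0.0.1:8002"}

// roundRobin picks the backend for the i-th request the way nginx's
// default polling does: one after another, wrapping around.
func roundRobin(i int) string {
	return backends[i%len(backends)]
}

func main() {
	for i := 0; i < 4; i++ {
		fmt.Println(roundRobin(i)) // alternates between 8001 and 8002
	}
}
```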

How to demonstrate

The prerequisite is multiple back-end services, which means multiple instances of go-gin-example. For the demonstration, we can start it on multiple ports to simulate this.

For convenience, modify the application port in conf/app.ini to 8001 and 8002 respectively before each start (this could also be turned into a command-line parameter), so that two back-end services listen on 8001 and 8002.
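One way to script the per-instance config is to rewrite the port key before each start. This is a hypothetical sketch: it assumes an ini key named HTTP_PORT, as in go-gin-example's conf/app.ini, and writes the derived configs to a demo directory rather than touching the real one; adjust the key and paths to your actual layout:

```shell
# Create a sample config to derive from (stands in for conf/app.ini).
mkdir -p demo/conf
printf 'HTTP_PORT = 8000\n' > demo/conf/app.ini

# Generate one config per backend port.
for port in 8001 8002; do
    sed "s/^HTTP_PORT = .*/HTTP_PORT = ${port}/" demo/conf/app.ini \
        > "demo/app-${port}.ini"
done

cat demo/app-8001.ini   # HTTP_PORT = 8001
```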

Configure nginx.conf

Back in nginx.conf, add the configuration needed for load balancing: add an upstream node, register the two back-end services under it, and change proxy_pass to point at it (format: http:// followed by the upstream node name).

worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    upstream api.blog.com {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen       8081;
        server_name  api.blog.com;

        location / {
            proxy_pass http://api.blog.com/;
        }
    }
}
Restart nginx
$ nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
$ nginx -s reload

Verification

Repeatedly access http://api.blog.com:8081/auth?username={USER_NAME}&password={PASSWORD} and watch what happens over several requests.

Nginx has no special configuration at this point, so it uses the round-robin strategy, and go-gin-example runs in debug mode by default. Just look at the request logs and you will see the effect.


Summary

In this chapter, I hope you have learned a little of what lies behind the web servers you use every day: what Nginx is, what a reverse proxy is, and what load balancing is.

As for simple deployment, now you know how it's done.

References

Sample code for this series

Table of contents for this series