Consul server configuration
The greatest advantage of microservices is that they split one large project into separate services that run on different servers, giving us decoupling and distributed processing. But everything has two sides: microservices also bring drawbacks, and operations and maintenance is a big one. Once we run a very large number of services, we no longer know which service lives on which machine. Some people may say it is fine to write this information directly into the program's configuration. We can do that while we have few services, but in practice we should avoid it. If one of our services changes its address, the code that references it has to be modified and redeployed; every time a service goes online or offline, we have to touch program code, which is very unreasonable. So is there anything that can solve such a problem? This is where service registration and discovery comes in.
[Figure: architecture without service registration and discovery]
In the figure above, without service registration and discovery, each caller has to maintain the IPs and ports of every service it uses. This is a very bad practice: when our services are adjusted it may lead to failed calls, and replacing servers or adding new services is affected in the same way. If a service node later has problems, troubleshooting becomes a disaster for developers and operations staff, because nobody knows which node is at fault and every server has to be checked.
And what does the architecture look like once we use service registration and discovery?
[Figure: architecture with service registration and discovery]
As the figure above shows, once we have a registry, a caller no longer needs to maintain all the service information itself; it simply asks the registry for the information of the service it wants. Adjusting a service, or bringing services online and offline, then becomes easy to operate, and the registry's health checks help operations staff quickly locate a failed server.
What Swoft recommends is Consul, a service registration, discovery, and configuration management system written in Go.
- Agent: The agent is a long-running daemon on every member of a Consul cluster. It runs in one of two modes, server or client. Every agent can serve the DNS or HTTP interface, and is responsible for running health checks and keeping services in sync.
- Client: Client is one of the agent's modes of operation. A client forwards all RPCs to a server. The client agent runs in the background and relays requests with minimal bandwidth consumption, reducing the load on the servers.
- Server: Server is the agent's other mode of operation. Its responsibilities include using the Raft algorithm to replicate data, maintaining cluster state, responding to RPC requests, and exchanging data with servers in other clusters or remote data centers.
- RPC: Remote Procedure Call, a request/response mechanism that allows a client to make requests of a server.
These are the important concepts for this article. Once we know what each component and each part does, the configuration below is simple and goes much more smoothly.
Consul has many components, but here we only need one of them: the agent. Installing Consul is also very simple. It ships as a single binary, so it can be downloaded and used directly.
1. Download the package from the official website.
wget https://releases.hashicorp.com/consul/1.2.1/consul_1.2.1_linux_amd64.zip
unzip consul_1.2.1_linux_amd64.zip
2. Set up the environment variables, or simply move the consul binary to the /usr/bin directory:
mv consul /usr/bin
After the installation succeeds, we can do some configuration and start Consul.
- Server 1, IP 192.168.1.100
This method is suitable for service debugging
consul agent -bootstrap-expect 1 -server -data-dir /data/consul -node=swoft01 -bind=0.0.0.0 -config-dir /etc/consul.d -enable-script-checks=true -datacenter=sunny -client=0.0.0.0 -ui
View service information at http://192.168.1.100:8500.
This method is suitable for a production environment
- Server 1, IP 192.168.1.100
consul agent -bootstrap-expect 2 -server -data-dir /data/consul -node=swoft01 -bind=0.0.0.0 -client=0.0.0.0 -config-dir /etc/consul.d -enable-script-checks=true -datacenter=sunny
The command above starts an agent in server mode, expecting two servers in the cluster. The cluster's persistent data is stored under /data/consul, the node name is swoft01, and the bind address is 0.0.0.0. Service configuration files are kept in /etc/consul.d, script health checks are enabled, the data center is named sunny, and the accessible client address is 0.0.0.0.
- Server 2, IP 192.168.1.110
consul agent -server -data-dir /data/consul -node=swoft02 -bind=0.0.0.0 -client=0.0.0.0 -config-dir /etc/consul.d -enable-script-checks=true -datacenter=sunny -join 192.168.1.100
- Server 3, IP 192.168.1.120
consul agent -server -data-dir /data/consul -node=swoft03 -bind=0.0.0.0 -client=0.0.0.0 -config-dir /etc/consul.d -enable-script-checks=true -datacenter=sunny -join 192.168.1.100
Servers 2 and 3 above use -join to join the cluster at 192.168.1.100 and use the same data center name, sunny.
- Server 4, IP 192.168.1.130
consul agent -ui -data-dir /data/consul -node=swoft04 -bind=0.0.0.0 -config-dir /etc/consul.d -enable-script-checks=true -datacenter=sunny -client=0.0.0.0 -join 192.168.1.100
An agent started without -server runs in client mode; the other parameters are the same as above. After the servers and the client are all started, open http://192.168.1.130:8500 in a browser to view the cluster information.
To view cluster members:
consul members
To view cluster information:
consul info
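Besides the CLI and the web UI, a running agent also exposes an HTTP API that callers can use for discovery. As a minimal sketch, the snippet below parses a response in the shape returned by the `/v1/catalog/service/<name>` endpoint and extracts address/port pairs; the payload here is hard-coded sample data (the node, addresses, and service name are made up), so no live agent is needed to run it:

```python
import json

# Sample payload in the shape returned by Consul's
# GET /v1/catalog/service/<name> endpoint (hard-coded here;
# a live agent would serve it, e.g. at http://192.168.1.130:8500).
sample = json.loads("""
[
  {
    "Node": "swoft01",
    "Address": "192.168.1.100",
    "ServiceName": "web",
    "ServiceAddress": "",
    "ServicePort": 80
  }
]
""")

def endpoints(catalog):
    """Return (host, port) pairs; an empty ServiceAddress falls back to the node Address."""
    return [(e["ServiceAddress"] or e["Address"], e["ServicePort"]) for e in catalog]

print(endpoints(sample))  # [('192.168.1.100', 80)]
```

A real caller would fetch this JSON from the agent instead of hard-coding it, so service addresses never need to live in the caller's own configuration.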
- -bootstrap-expect: The number of servers expected in the data center. This value either should not be provided, or must be consistent across the servers in the cluster. Once provided, Consul waits for the specified number of servers to become available before bootstrapping the cluster, which allows the initial leader to be elected automatically. It cannot be used together with the legacy -bootstrap flag, and it requires server mode.
- -server: Start the agent in server mode.
- -data-dir: The data storage location, used to persist cluster state.
- -node: The name of this node in the cluster. It must be unique within the cluster; by default it is the machine's host name.
- -bind: The IP address the server binds to.
- -config-dir: The configuration directory. Files in this directory ending in .json are loaded; for more options, refer to the configuration template.
- -enable-script-checks: Enable script health checks, similar to a heartbeat, to check whether a service is alive.
- -datacenter: The data center name.
- -client: The address the client interfaces, including the HTTP and DNS servers, are bound to. By default this is 127.0.0.1, allowing only loopback connections.
- -ui: Enable the built-in web UI.
- -join: Join an existing cluster.
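As an example of what -config-dir loads, here is a sketch of a service definition file (say, /etc/consul.d/web.json; the service name, port, and check URL are made-up values). The agent registers each service it finds in such files and runs the declared health check:

```json
{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "http": "http://localhost:80/health",
      "interval": "10s"
    }
  }
}
```

After dropping a file like this into the directory and reloading the agent, the service appears in the catalog and in the web UI, with its health tracked by the check.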
For more information about Consul, please head to the official Consul website.