Gateway bottleneck


Even so, the gateway acting as the NAT server has itself become a bottleneck that limits the expansion of the cluster. We know that the NAT server not only forwards the user's requests to the actual servers, but also forwards the actual servers' responses back to the user. Therefore, when the number of actual servers is large and the response traffic is heavy, response packets from multiple actual servers can pile up at the NAT server.
Obviously, this puts the forwarding capability of the NAT server to the test. Since packet forwarding is done in the kernel, its extra overhead is negligible, so the forwarding capability depends mainly on the NAT server's network bandwidth, on both the internal network and the external network.
For example, suppose the NAT server forms an internal network with a number of actual servers through a 100 Mbps switch. From the earlier chapter on bandwidth we know that these actual servers share the 100 Mbps link to the NAT server. So even though each actual server can easily generate 100 Mbps of response traffic on its own, for instance when providing a download service, the NAT server's 100 Mbps exit bandwidth becomes the constraint: no matter how many actual servers are added, the whole cluster can deliver at most 100 Mbps of service.
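To make the constraint concrete, here is a back-of-the-envelope sketch in Python. The function name and the per-server figure of 100 Mbps are illustrative assumptions, not part of the original text; the point is simply that the cluster's deliverable bandwidth is the minimum of the actual servers' aggregate capacity and the NAT server's external bandwidth.

```python
# Back-of-the-envelope model of the NAT gateway bottleneck described above.
# The figures are assumptions for illustration: each actual server is taken
# to be capable of pushing about 100 Mbps of response traffic on its own.

def cluster_throughput_cap(nat_uplink_mbps, per_server_mbps, num_servers):
    """All responses flow back through the NAT server, so the cluster can
    never deliver more than the NAT server's external bandwidth, no matter
    how much the actual servers could send in aggregate."""
    aggregate = per_server_mbps * num_servers
    return min(aggregate, nat_uplink_mbps)

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        cap = cluster_throughput_cap(nat_uplink_mbps=100,
                                     per_server_mbps=100,
                                     num_servers=n)
        print(f"{n} actual server(s): cluster tops out at {cap} Mbps")
    # With a 100 Mbps exit link, every row prints 100 Mbps,
    # which is exactly the constraint described in the text.
```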
Solving the gateway bandwidth bottleneck is not difficult: we can use gigabit network cards on the NAT server and a gigabit switch for the internal network.
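Rerunning the sketch above with nat_uplink_mbps=1000 lifts the ceiling to roughly 1 Gbps, so adding actual servers pays off again until the gigabit exit link itself is saturated.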