Demystifying the Hive API Gateway: Netty in Practice

  api, netty

The Hive Team of Yiren Loan, founded by Michael in 2013, uses Internet technology to promote the harmonious and healthy development of the financial ecosystem. Since its establishment, it has been committed to building a multi-dimensional, closed-loop data platform. The team now numbers more than 100 people and covers the capture and analysis of user credit data from sources such as credit bureaus, e-commerce, finance, social networking, social insurance ("five insurances and one housing fund"), and insurance. On top of this, it applies data analysis, mining, machine learning, and other techniques to predict and assess users' credit levels and fraud risk, and exports services and products such as financial anti-fraud, social graph analysis, and automated model customization.

Today, based on real-time capture and analysis of user-authorized data, combined with leading big data technology, rapid iteration, and independent innovation, Hive has built a strong, industry-leading aggregation and service output capability.

To support this strong service output capability, Hive designed and developed its own API gateway system, which focuses on authentication, encryption and decryption, routing, rate limiting, and related functions. This lets each service capture team concentrate on its core capture and analysis work, while the gateway takes care of security, traffic, routing, and similar concerns, better ensuring the quality of the Hive service system. This article walks through the Netty thread pool practice behind the API gateway.

The API gateway serves as the unified entry point to Yiren Loan Hive's open data platform: all clients and consumers use the various capture services through a unified API. From an object-oriented design perspective, it resembles the Facade pattern, wrapping the implementation details and exposing a single, uniform calling interface.

This article first briefly introduces the project framework of the API gateway; then compares the characteristics of BIO and NIO and introduces Netty as the project's base framework; then explains the principles of Netty's thread pool; and finally discusses in depth the initialization of the Netty thread pool, the initialization and startup of ServerBootstrap, and the binding between channels and the thread pool, so that readers can understand Netty's design thinking for handling highly concurrent access.

Project framework

Figure 1: API Gateway Project Framework

The figure depicts the processing flow of the API gateway system and its relationship with service registration and discovery, log analysis, the alarm system, and the various crawlers. The gateway receives a request, encodes/decodes it, authenticates it, applies rate limiting, encrypts/decrypts it, and then forwards it to a healthy service node chosen through the Eureka service registration and discovery module. The logs of the gateway and the capture systems are collected into an ELK platform for business analysis and alarm processing.

BIO vs NIO

The API gateway carries several times the traffic of the crawlers, so improving the server's concurrent processing capability and shortening the system's response time are essential. The choice of communication model is therefore crucial: BIO or NIO?

Stream vs Buffer & Blocking vs Non-blocking

BIO is stream-oriented: a thread reads from and writes to the stream one or more bytes at a time, and if no data is available it simply waits, unable to skip the IO temporarily or be notified asynchronously when the IO completes. Threads stuck in IO reads and writes cannot make full use of the machine's limited thread resources, so server throughput stays low, as shown in Figure 2. NIO is different: it is buffer-oriented, so a thread does not have to sit on an IO operation; using the operating system's epoll model (on Linux), the thread processes IO only when the data is ready, as shown in Figure 3 (a minimal code sketch of the contrast follows Figure 3).

Figure 2: BIO Reads Data from a Stream

Figure 3: NIO Reads Data from a Buffer

Selectors

NIO's Selector enables one thread to monitor reads and writes on multiple channels: multiple channels are registered with one Selector, which watches each channel's data readiness, allowing far more connections to be handled with limited thread resources, as shown in Figure 4 (a bare-bones example follows the figure). NIO therefore greatly improves a server's ability to accept concurrent requests, while overall performance still depends on business processing time and the business thread pool model.

Figure 4: A Single NIO Thread Manages Multiple Connections
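A bare-bones JDK example (ours) of the pattern in Figure 4: one thread, one Selector, many registered channels:

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

// One thread monitors readiness for every registered channel.
public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);            // required before registering
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select();                      // blocks until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // accept the connection and register it for OP_READ ...
                }
            }
        }
    }
}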

BIO, by contrast, uses a thread-per-connection model: one thread accepts TCP connection requests and establishes the connections, then dispatches each request to a thread responsible for business logic, as shown in Figure 5 (the classic skeleton follows the figure). Once traffic grows, the machine's thread resources become scarce, requests are delayed, and the service may even go down.

Figure 5: BIO, One Thread per Connection
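The thread-per-connection model in Figure 5 boils down to this classic skeleton (illustrative only):

import java.net.ServerSocket;
import java.net.Socket;

// Every accepted connection consumes a dedicated thread for its whole lifetime.
public class BioServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            Socket socket = server.accept();           // blocks until a client connects
            new Thread(() -> handle(socket)).start();  // one thread per connection
        }
    }

    static void handle(Socket socket) {
        // read the request, process it, write the response, close the socket
    }
}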

After comparing raw JDK NIO with a number of NIO frameworks, we chose Netty as the communication model of the API gateway in view of its elegant design, easy-to-use API, superior performance, and security support, and built the basic framework on it.

Netty thread pool

Considering the API gateway's requirement for highly concurrent access, the thread pools are designed as shown in Figure 6.

Figure 6: API Gateway Thread Pool Design

Netty's thread pool concept is somewhat like ForkJoinPool: rather than a single large pool of threads sharing one task queue, each thread has its own task queue. Moreover, a Netty thread does not simply pull tasks in a blocking way; instead, it does three things in each loop iteration:

  • first, handle NIO events via select()/processSelectedKeys();
  • then move this thread's scheduled (timed) tasks that are due into its task queue;
  • finally, execute the tasks submitted to this thread by other threads.

The proportion of each loop spent on NIO events versus other tasks can be tuned through the ioRatio variable, which defaults to 50. As you can see, Netty threads never block idly waiting for tasks, so the task queue does not need a locking BlockingQueue; Netty uses the lock-free MpscLinkedQueue instead (Mpsc is an abbreviation of multiple producer, single consumer). A condensed view of this loop follows.
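Condensed from NioEventLoop.run() in the Netty 4.1 sources (select-strategy and wake-up details omitted), the three steps and the ioRatio time budget look roughly like this:

// Inside NioEventLoop.run(), simplified; ioRatio defaults to 50.
for (;;) {
    select();                                    // wait for ready NIO events
    final long ioStartTime = System.nanoTime();
    processSelectedKeys();                       // 1. handle the IO events
    final long ioTime = System.nanoTime() - ioStartTime;
    // 2. + 3. move due scheduled tasks into the task queue, then run queued tasks,
    // within a time budget proportional to the IO time just spent
    runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
}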

NioEventLoopGroup initialization

The following analyzes the design and implementation details of the Netty thread pool NioEventLoopGroup. The class hierarchy of NioEventLoopGroup is shown in Figure 7.

Figure 7: NioEventLoopGroup Class Hierarchy

The method-call sequence of the creation process is shown in the following figure.

Figure 8: NioEventLoopGroup Creation Call Relationships

A NioEventLoopGroup is created by executing the constructor of the class MultithreadEventExecutorGroup:


/**
 * Create a new instance.
 *
 * @param nThreads          the number of threads that will be used by this instance.
 * @param executor          the Executor to use, or {@code null} if the default should be used.
 * @param chooserFactory    the {@link EventExecutorChooserFactory} to use.
 * @param args              arguments which will be passed to each {@link #newChild(Executor, Object...)} call
 */
protected MultithreadEventExecutorGroup(int nThreads, Executor executor,
                                        EventExecutorChooserFactory chooserFactory, Object... args) {
    if (nThreads <= 0) {
        throw new IllegalArgumentException(String.format("nThreads: %d (expected: > 0)", nThreads));
    }
    if (executor == null) {
        executor = new ThreadPerTaskExecutor(newDefaultThreadFactory());
    }
    children = new EventExecutor[nThreads];
    for (int i = 0; i < nThreads; i ++) {
        boolean success = false;
        try {
            children[i] = newChild(executor, args);
            success = true;
        } catch (Exception e) {
            throw new IllegalStateException("failed to create a child event loop", e);
        } finally {
            if (!success) {
                for (int j = 0; j < i; j ++) {
                    children[j].shutdownGracefully();
                }
                for (int j = 0; j < i; j ++) {
                    EventExecutor e = children[j];
                    try {
                        while (!e.isTerminated()) {
                            e.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS);
                        }
                    } catch (InterruptedException interrupted) {
                        // Let the caller handle the interruption.
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
    }
    chooser = chooserFactory.newChooser(children);
    final FutureListener<Object> terminationListener = new FutureListener<Object>() {
        @Override
        public void operationComplete(Future<Object> future) throws Exception {
            if (terminatedChildren.incrementAndGet() == children.length) {
                terminationFuture.setSuccess(null);
            }
        }
    };
    for (EventExecutor e: children) {
        e.terminationFuture().addListener(terminationListener);
    }
    Set<EventExecutor> childrenSet = new LinkedHashSet<EventExecutor>(children.length);
    Collections.addAll(childrenSet, children);
    readonlyChildren = Collections.unmodifiableSet(childrenSet);
}

The details of creation are as follows:

  • The number of threads in the pool, nThreads, must be greater than 0;
  • if executor is null, a default executor is created, which is used to create the threads (the newChild method uses this executor object);
  • each thread in the pool, i.e. each NioEventLoop, is created in turn; if one thread fails to be created, all previously created threads are shut down;
  • chooser is the thread pool's selector, used to pick the next EventExecutor, which can be understood as choosing a thread to execute a task.
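For orientation, here is a typical instantiation (our example) and the constructor chain it passes through in Netty 4.1 (intermediate overloads elided):

// new NioEventLoopGroup(4)
//   -> NioEventLoopGroup(4, (Executor) null, SelectorProvider.provider(), ...)
//   -> MultithreadEventLoopGroup(...)
//   -> MultithreadEventExecutorGroup(nThreads, executor, chooserFactory, args)
EventLoopGroup workerGroup = new NioEventLoopGroup(4);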

For details of chooser’s creation, see below.

DefaultEventExecutorChooserFactory creates a specific EventExecutorChooser according to the number of threads. If the number of threads is a power of two (2^n), a bitwise AND can replace the modulo operation, saving CPU cycles. See the source code:


@SuppressWarnings("unchecked")
@Override
public EventExecutorChooser newChooser(EventExecutor[] executors) {
    if (isPowerOfTwo(executors.length)) {
        return new PowerOfTwoEventExecutorChooser(executors);
    } else {
        return new GenericEventExecutorChooser(executors);
    }
}

private static final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final EventExecutor[] executors;

    PowerOfTwoEventExecutorChooser(EventExecutor[] executors) {
        this.executors = executors;
    }

    @Override
    public EventExecutor next() {
        return executors[idx.getAndIncrement() & executors.length - 1];
    }
}

private static final class GenericEventExecutorChooser implements EventExecutorChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final EventExecutor[] executors;

    GenericEventExecutorChooser(EventExecutor[] executors) {
        this.executors = executors;
    }

    @Override
    public EventExecutor next() {
        return executors[Math.abs(idx.getAndIncrement() % executors.length)];
    }
}
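To see why the bitwise trick works: for any power of two n, i & (n - 1) equals i % n for non-negative i, because n - 1 is a mask of the low bits. A tiny self-check (ours, for illustration):

// Verifies (i & (n - 1)) == (i % n) when n is a power of two.
public class PowerOfTwoCheck {
    public static void main(String[] args) {
        int n = 8; // e.g. a pool of 8 EventExecutors
        for (int i = 0; i < 32; i++) {
            if ((i & (n - 1)) != (i % n)) {
                throw new AssertionError("mismatch at i=" + i);
            }
        }
        System.out.println("bitwise AND matches modulo for n=" + n);
    }
}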

Details of the creation of newChild(executor, args) are shown below.

The newChild method of MultithreadEventExecutorGroup is abstract, so the override in NioEventLoopGroup is used, which calls the constructor of NioEventLoop:



@Override
protected EventLoop newChild(Executor executor, Object... args) throws Exception {
    return new NioEventLoop(this, executor, (SelectorProvider) args[0],
        ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2]);
}

Let’s look at the class hierarchy of NioEventLoop first.

(Figure: NioEventLoop class hierarchy)

NioEventLoop's inheritance relationship is complicated. In AbstractScheduledEventExecutor, Netty implements NioEventLoop's scheduling function; that is, we can run timed tasks by calling the schedule method of a NioEventLoop instance. In SingleThreadEventLoop, the task queue function is implemented; through it, we can call the execute method of a NioEventLoop instance to add a task to the task queue, to be scheduled and executed by the NioEventLoop.

In general, NioEventLoop shoulders two responsibilities. The first is to perform Channel-related IO operations as an IO thread, including calling select to wait for ready IO events and reading, writing, and processing data. The second is to act as a task executor, running the tasks in its taskQueue; for example, a timed task submitted by the user via eventLoop.schedule is also executed by this thread. A usage sketch follows.
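A short usage sketch (ours) of the two roles:

import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import java.util.concurrent.TimeUnit;

// One NioEventLoop acts as both an IO thread and a task/scheduled-task executor.
public class EventLoopTasksDemo {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        EventLoop loop = group.next();
        // task-queue role (SingleThreadEventLoop): runs on the event loop thread
        loop.execute(() -> System.out.println("queued task"));
        // scheduling role (AbstractScheduledEventExecutor): a timed task
        loop.schedule(() -> System.out.println("timed task"), 1, TimeUnit.SECONDS);
        Thread.sleep(1500); // let the timed task fire before shutting down
        group.shutdownGracefully();
    }
}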

The specific construction process does the following:

  • creates the tail task queue tailTasks (internally a bounded LinkedBlockingQueue by default);
  • creates the thread's task queue taskQueue (likewise a bounded LinkedBlockingQueue by default; note that NioEventLoop overrides newTaskQueue to substitute the lock-free MPSC queue mentioned earlier), along with a rejectedHandler that prevents the system from going down when too many tasks pile up.

Among them, tailTasks and taskQueue are task queues of different priorities: taskQueue has a higher priority than tailTasks, and timed tasks have a higher priority than taskQueue. A condensed view of where the two queues are created follows.
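Condensed from the Netty 4.1 sources (field declarations and argument checks omitted), the two queues are created in the constructor chain:

// SingleThreadEventLoop: creates tailTasks, which runs at the end of each loop iteration.
protected SingleThreadEventLoop(EventLoopGroup parent, Executor executor, boolean addTaskWakesUp,
                                int maxPendingTasks, RejectedExecutionHandler rejectedExecutionHandler) {
    super(parent, executor, addTaskWakesUp, maxPendingTasks, rejectedExecutionHandler);
    tailTasks = newTaskQueue(maxPendingTasks);
}

// The super constructor (SingleThreadEventExecutor) creates the main queue and keeps the handler:
//     taskQueue = newTaskQueue(this.maxPendingTasks);
//     this.rejectedExecutionHandler = ObjectUtil.checkNotNull(rejectedHandler, "rejectedHandler");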

ServerBootstrap initialization and startup

Having seen how the Netty thread pool NioEventLoopGroup is created, let's look at how the API gateway's ServerBootstrap uses these thread pools to serve highly concurrent access.

The API gateway's ServerBootstrap initialization and startup code is shown below:


serverBootstrap = new ServerBootstrap();
bossGroup = new NioEventLoopGroup(config.getBossGroupThreads());
workerGroup = new NioEventLoopGroup(config.getWorkerGroupThreads());

serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
        .option(ChannelOption.TCP_NODELAY, config.isTcpNoDelay())
        .option(ChannelOption.SO_BACKLOG, config.getBacklogSize())
        .option(ChannelOption.SO_KEEPALIVE, config.isSoKeepAlive())
        // Memory pooled
        .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
        .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
        .childHandler(channelInitializer);

ChannelFuture future = serverBootstrap.bind(config.getPort()).sync();
log.info("API-gateway started on port: {}", config.getPort());
future.channel().closeFuture().sync();

The API gateway system uses Netty's own thread pools, which come in three groups: bossGroup, workerGroup, and executorGroup (used in the channelInitializer; not covered in detail in this article). bossGroup accepts TCP connections from clients; workerGroup handles IO and executes system and timed tasks; and executorGroup handles business operations such as gateway encryption and decryption, rate limiting, routing, and forwarding requests to the backend capture services. A sketch of how such an executorGroup is typically attached follows.
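As a hedged sketch (ours; the gateway's actual channelInitializer is not shown in this article), a handler added together with an EventExecutorGroup runs on that group instead of on the IO worker thread; GatewayBusinessHandler is a hypothetical name:

EventExecutorGroup executorGroup = new DefaultEventExecutorGroup(16);
ChannelInitializer<SocketChannel> channelInitializer = new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HttpServerCodec());                        // protocol codec
        ch.pipeline().addLast(executorGroup, new GatewayBusinessHandler());  // hypothetical business handler
    }
};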

Binding of Channel to Thread Pool

After the ServerBootstrap is initialized, the server is started by calling the bind(port) method. The call chain of bind is as follows:

AbstractBootstrap.bind -> AbstractBootstrap.doBind -> AbstractBootstrap.initAndRegister

In initAndRegister, the line ChannelFuture regFuture = config().group().register(channel); is executed. Here group() returns the bossGroup, and channel was specified as NioServerSocketChannel.class when the serverBootstrap was initialized, so the NioServerSocketChannel is bound to the bossGroup, which is responsible for establishing client connections (a condensed view follows). So how does a NioSocketChannel get bound to the workerGroup?
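Condensed from AbstractBootstrap.initAndRegister() in Netty 4.1 (error handling omitted):

final ChannelFuture initAndRegister() {
    Channel channel = channelFactory.newChannel(); // reflectively creates the NioServerSocketChannel
    init(channel);                                 // ServerBootstrap.init: sets options, adds ServerBootstrapAcceptor
    return config().group().register(channel);     // binds the server channel to the bossGroup
}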

The call chain is: AbstractBootstrap.initAndRegister -> AbstractBootstrap.init -> ServerBootstrap.init -> ServerBootstrap.ServerBootstrapAcceptor -> ServerBootstrapAcceptor.channelRead




public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;
    child.pipeline().addLast(childHandler);
    for (Entry<ChannelOption<?>, Object> e: childOptions) {
        try {
            if (!child.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
                logger.warn("Unknown channel option: " + e);
            }
        } catch (Throwable t) {
            logger.warn("Failed to set a channel option: " + child, t);
        }
    }
    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }
    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}

Here, childGroup.register(child) binds the NioSocketChannel to the workerGroup. But what triggers the channelRead method of ServerBootstrapAcceptor?

In fact, when a client connects to the server, the underlying Java NIO ServerSocketChannel fires a SelectionKey.OP_ACCEPT ready event, which in turn causes NioServerSocketChannel.doReadMessages to be called:


@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = javaChannel().accept();
    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        // …
    }
    return 0;
}

javaChannel().accept() obtains the SocketChannel of the newly connected client and wraps it in a NioSocketChannel, passing in the NioServerSocketChannel object (this). The parent channel of the newly created NioSocketChannel is therefore the NioServerSocketChannel instance.

Next, through Netty's ChannelPipeline mechanism, the read event is propagated from handler to handler, eventually triggering the ServerBootstrapAcceptor.channelRead method discussed above.

At this point we have analyzed the initialization of the Netty thread pool, the startup of ServerBootstrap, and the binding between channels and the thread pool. Netty's elegant thread pool design is evident: separate thread pools handle connection establishment, IO reads and writes, and business processing, which provides the technical foundation for the API gateway's highly concurrent access.

Summary

This concludes our sharing of the Netty practice behind the API gateway. If you have questions or suggestions about any part of it, please point them out; we will discuss them and improve together.


Author: Hive Team, Yixin Institute of Technology