Tomcat thread pool policy



Tomcat’s thread pool extends the JDK’s ThreadPoolExecutor, and its work queue is Tomcat’s own TaskQueue, so its behavior differs from the JDK default. This is worth understanding carefully, or it is easy to be caught out.

Tomcat thread pool policy

  • Scenario 1: a request arrives while the number of threads has not yet reached the core pool size (configured as minSpareThreads in Tomcat); Tomcat starts a new thread to process the request;

  • Scenario 2: a request arrives after the number of started threads has reached the core pool size. Tomcat tries to put the request into the queue (offer). If the offer succeeds, it returns; if it fails, Tomcat tries to add a worker thread: while the current thread count is below maxThreads, a new thread is created to process the request; once maxThreads is reached, the request is put back into the waiting queue, and if even that fails, a RejectedExecutionException is thrown.

It is worth noting that if a plain LinkedBlockingQueue were used, its default capacity is Integer.MAX_VALUE, i.e. effectively unbounded. In that case, if no capacity is configured, the queue never fills up, the pool never reaches the point of creating threads beyond the core size, and configuring maxThreads becomes effectively meaningless.
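This effect is easy to reproduce. A minimal sketch (class name, pool sizes, and task count here are illustrative, not Tomcat's): 100 blocking tasks go into a pool with core=2, max=10 and an unbounded LinkedBlockingQueue, and the pool never grows past the core size.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {

    // Submits 100 blocking tasks to a pool with core=2, max=10 and an
    // unbounded LinkedBlockingQueue, then reports the pool size reached.
    static int poolSizeAfterBurst() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch gate = new CountDownLatch(1);
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try { gate.await(); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize(); // offer() never fails, so no extra threads
        gate.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        // The queue never reports "full", so max=10 is never reached.
        System.out.println(poolSizeAfterBurst()); // prints 2
    }
}
```

The first two submissions create the core threads; every later submission is queued because the unbounded offer always succeeds, so maxThreads never comes into play.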

The queue capacity of TaskQueue is maxQueueSize, which also defaults to Integer.MAX_VALUE. However, TaskQueue overrides the offer method and returns false when the pool size is smaller than maximumPoolSize, i.e. it reports the queue as "full" early. This fixes the "bug" that maxThreads has no effect when LinkedBlockingQueue's default capacity is Integer.MAX_VALUE: the pool can keep growing up to maxThreads, and only beyond that are tasks actually put into the queue.

Offer operation of TaskQueue

    public boolean offer(Runnable o) {
        //we can't do any checks
        if (parent==null) return super.offer(o);
        //we are maxed out on threads, simply queue the object
        if (parent.getPoolSize() == parent.getMaximumPoolSize()) return super.offer(o);
        //we have idle threads, just add it to the queue
        if (parent.getSubmittedCount()<(parent.getPoolSize())) return super.offer(o);
        //if we have less threads than maximum force creation of a new thread
        if (parent.getPoolSize()<parent.getMaximumPoolSize()) return false;
        //if we reached here, we need to add it to the queue
        return super.offer(o);
    }


    /**
     * Start the component and implement the requirements
     * of {@link org.apache.catalina.util.LifecycleBase#startInternal()}.
     *
     * @exception LifecycleException if this component detects a fatal error
     *  that prevents this component from being used
     */
    @Override
    protected void startInternal() throws LifecycleException {

        taskqueue = new TaskQueue(maxQueueSize);
        TaskThreadFactory tf = new TaskThreadFactory(namePrefix,daemon,getThreadPriority());
        executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), maxIdleTime, TimeUnit.MILLISECONDS,taskqueue, tf);
        if (prestartminSpareThreads) {
            executor.prestartAllCoreThreads();
        }
        taskqueue.setParent(executor);

        setState(LifecycleState.STARTING);
    }


It is worth noting that Tomcat’s thread pool uses its own extended TaskQueue instead of the LinkedBlockingQueue used by the Executors factory methods. (The main change is the offer logic.)

maxQueueSize here defaults to Integer.MAX_VALUE:

    /**
     * The maximum number of elements that can queue up before we reject them
     */
    protected int maxQueueSize = Integer.MAX_VALUE;


    /**
     * Executes the given command at some time in the future.  The command
     * may execute in a new thread, in a pooled thread, or in the calling
     * thread, at the discretion of the <tt>Executor</tt> implementation.
     * If no threads are available, it will be added to the work queue.
     * If the work queue is full, the system will wait for the specified
     * time and it throw a RejectedExecutionException if the queue is still
     * full after that.
     *
     * @param command the runnable task
     * @param timeout A timeout for the completion of the task
     * @param unit The timeout time unit
     * @throws RejectedExecutionException if this task cannot be
     * accepted for execution - the queue is full
     * @throws NullPointerException if command or unit is null
     */
    public void execute(Runnable command, long timeout, TimeUnit unit) {
        submittedCount.incrementAndGet();
        try {
            super.execute(command);
        } catch (RejectedExecutionException rx) {
            if (super.getQueue() instanceof TaskQueue) {
                final TaskQueue queue = (TaskQueue)super.getQueue();
                try {
                    if (!queue.force(command, timeout, unit)) {
                        submittedCount.decrementAndGet();
                        throw new RejectedExecutionException("Queue capacity is full.");
                    }
                } catch (InterruptedException x) {
                    submittedCount.decrementAndGet();
                    throw new RejectedExecutionException(x);
                }
            } else {
                submittedCount.decrementAndGet();
                throw rx;
            }
        }
    }

Note that the JDK thread pool’s default rejection behavior is overridden here: the RejectedExecutionException is caught. Under the normal JDK rules, a RejectedExecutionException is thrown when core threads + temporary threads would exceed maxSize and the queue is full. Here the exception is caught, and the task is offered to the TaskQueue one more time via force().

    public boolean force(Runnable o, long timeout, TimeUnit unit) throws InterruptedException {
        if ( parent==null || parent.isShutdown() ) throw new RejectedExecutionException("Executor not running, can't force a command into the queue");
        return super.offer(o,timeout,unit); //forces the item onto the queue, to be used if the task is rejected
    }
Note that super.offer(o, timeout, unit) here is LinkedBlockingQueue's timed offer: it returns false only when the queue is still full after the timeout, and only then is a RejectedExecutionException thrown. (This changes the rejection logic of the JDK's ThreadPoolExecutor: beyond maxThreads, instead of immediately throwing a RejectedExecutionException, Tomcat continues to put tasks into the queue, and since TaskQueue is unbounded by default, a RejectedExecutionException is almost never thrown.)
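The timed-offer semantics that force() relies on can be shown in isolation. A minimal sketch (class name, capacity, and timeout are illustrative): a bounded LinkedBlockingQueue's timed offer waits up to the timeout and then returns false if the queue is still full, which is the condition under which Tomcat's force() path finally surfaces a RejectedExecutionException.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedOfferDemo {

    // Fills a capacity-1 queue, then tries a timed offer: since no consumer
    // drains the queue, the offer times out and returns false.
    static boolean offerToFullQueue() throws InterruptedException {
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>(1);
        q.offer("first");                                    // fills the only slot
        return q.offer("second", 50, TimeUnit.MILLISECONDS); // times out -> false
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(offerToFullQueue()); // prints false
    }
}
```

With TaskQueue's default Integer.MAX_VALUE capacity the false branch is practically unreachable, which is exactly why Tomcat's pool rarely rejects by default.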

JDK thread pool policy

  1. Each time a task is submitted, if the number of threads has not yet reached coreSize, a new thread is created and bound to the task. So the first coreSize submissions each create a thread, and existing idle threads are not reused until the pool reaches coreSize.

  2. After the number of threads reaches coreSize, new tasks are put into the work queue, while the threads in the pool pull work from the queue with take().

  3. If the queue is bounded and the pool's threads cannot take tasks away in time, the work queue may fill up and insertion will fail. At that point the thread pool urgently creates new temporary threads to help.

  4. Temporary threads use poll(keepAliveTime, timeUnit) to pull tasks from the work queue; if they come back empty-handed when the time is up, they are considered too idle and are terminated.

  5. If core threads + temporary threads would exceed maxSize, no new temporary thread can be created and the RejectedExecutionHandler is invoked instead. The default AbortPolicy throws a RejectedExecutionException. Other built-in options silently discard the current task, discard the oldest task in the work queue, or run the task directly on the submitting thread (CallerRunsPolicy).
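Steps 1–5 can be exercised with a small sketch (class name, pool sizes, and queue capacity are illustrative): with core=1, max=2 and a queue of capacity 1, the fourth of four blocking tasks hits a full queue and a maxed-out pool, so only it is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class JdkRejectDemo {

    // Submits 4 blocking tasks to a pool with core=1, max=2 and a queue of
    // capacity 1, counting rejections under the default AbortPolicy:
    // task 1 -> core thread, task 2 -> queue, task 3 -> temporary thread,
    // task 4 -> queue full and pool at max, so it is rejected.
    static int rejectedOutOfFour() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1));
        CountDownLatch gate = new CountDownLatch(1);
        int rejected = 0;
        for (int i = 0; i < 4; i++) {
            try {
                pool.execute(() -> {
                    try { gate.await(); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++; // only the 4th submission lands here
            }
        }
        gate.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(rejectedOutOfFour()); // prints 1
    }
}
```

Because the workers block on the latch and never drain the queue, the outcome of each submission is deterministic.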

Source code

    /**
     * Executes the given task sometime in the future.  The task
     * may execute in a new thread or in an existing pooled thread.
     *
     * If the task cannot be submitted for execution, either because this
     * executor has been shutdown or because its capacity has been reached,
     * the task is handled by the current {@code RejectedExecutionHandler}.
     *
     * @param command the task to execute
     * @throws RejectedExecutionException at discretion of
     *         {@code RejectedExecutionHandler}, if the task
     *         cannot be accepted for execution
     * @throws NullPointerException if {@code command} is null
     */
    public void execute(Runnable command) {
        if (command == null)
            throw new NullPointerException();
        /*
         * Proceed in 3 steps:
         *
         * 1. If fewer than corePoolSize threads are running, try to
         * start a new thread with the given command as its first
         * task.  The call to addWorker atomically checks runState and
         * workerCount, and so prevents false alarms that would add
         * threads when it shouldn't, by returning false.
         *
         * 2. If a task can be successfully queued, then we still need
         * to double-check whether we should have added a thread
         * (because existing ones died since last checking) or that
         * the pool shut down since entry into this method. So we
         * recheck state and if necessary roll back the enqueuing if
         * stopped, or start a new thread if there are none.
         *
         * 3. If we cannot queue task, then we try to add a new
         * thread.  If it fails, we know we are shut down or saturated
         * and so reject the task.
         */
        int c = ctl.get();
        if (workerCountOf(c) < corePoolSize) {
            if (addWorker(command, true))
                return;
            c = ctl.get();
        }
        if (isRunning(c) && workQueue.offer(command)) {
            int recheck = ctl.get();
            if (! isRunning(recheck) && remove(command))
                reject(command);
            else if (workerCountOf(recheck) == 0)
                addWorker(null, false);
        }
        else if (!addWorker(command, false))
            reject(command);
    }

Tomcat’s thread pool differs from a JDK pool with an unbounded LinkedBlockingQueue in two main points:

  • The JDK ThreadPoolExecutor's thread-growth strategy: with a bounded queue, if the pool's threads cannot take tasks away in time, the work queue fills up, insertion fails, and the pool then urgently creates new temporary threads. Tomcat's ThreadPoolExecutor uses TaskQueue, which is unbounded, but TaskQueue overrides LinkedBlockingQueue's offer method so that the pool still follows the JDK's bounded-queue growth strategy: offer reports the queue as full while the pool can still grow.

  • In the JDK's ThreadPoolExecutor, when core threads + temporary threads would exceed maxSize, no new temporary thread can be created and the RejectedExecutionHandler runs. Tomcat's ThreadPoolExecutor overrides this rule: it catches the RejectedExecutionException and keeps putting tasks into the queue, throwing only when the queue itself is full, and the TaskQueue is unbounded by default.
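The first difference, the offer override, can be sketched on its own. This is a minimal, simplified version of the TaskQueue trick (the class name GrowFirstQueue and the pool sizes are illustrative, not Tomcat's, and the real TaskQueue also checks the submitted-task count): an unbounded queue that reports itself "full" while the pool is below maximumPoolSize, so the executor keeps adding threads before queueing.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// An unbounded queue whose offer() pretends the queue is full while the
// pool can still grow, forcing ThreadPoolExecutor to create new threads
// up to maximumPoolSize before actually queueing tasks.
public class GrowFirstQueue extends LinkedBlockingQueue<Runnable> {

    private transient ThreadPoolExecutor parent;

    void setParent(ThreadPoolExecutor tpe) { this.parent = tpe; }

    @Override
    public boolean offer(Runnable r) {
        if (parent == null) return super.offer(r);
        // Report "full" while below max, so execute() calls addWorker instead.
        if (parent.getPoolSize() < parent.getMaximumPoolSize()) return false;
        return super.offer(r);
    }

    // Submits 20 blocking tasks to a core=2, max=8 pool backed by this queue
    // and reports the pool size reached.
    static int poolSizeAfterBurst() throws InterruptedException {
        GrowFirstQueue q = new GrowFirstQueue();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 8, 60L, TimeUnit.SECONDS, q);
        q.setParent(pool);
        CountDownLatch gate = new CountDownLatch(1);
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> {
                try { gate.await(); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize(); // grows past core size, unlike a plain queue
        gate.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(poolSizeAfterBurst()); // prints 8
    }
}
```

Compare this with the plain-LinkedBlockingQueue case, where the same burst leaves the pool at its core size: here the pool grows to max first, and only the remaining tasks are queued.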

Question: since TaskQueue is unbounded, where does a Tomcat server limit the requests it accepts, and how does it protect itself? Also, what is the relationship between acceptCount and maxConnections?