Part four concludes this series of articles on concurrency and accepting server-side socket connections. In this final article, the concepts introduced in the first three articles are used to analyze the behavior of the Tomcat Web Container inside a JBoss J2EE Application Server JVM.
Tomcat Processor Thread Pool in the JBoss 3.2 Container
The Servlet Container used in JBoss is Tomcat, the JSP/Servlet container from the Apache project; it provides JSP and Servlet processing capabilities to the JBoss J2EE container.
JBoss/Tomcat maintains a pool of "processor" threads that accept incoming HTTP connections. This pool of processor threads is the workhorse of the JBoss Application Server's web applications. A thread dump from a JBoss 3.2 container will contain threads similar to those shown below.
At any given moment, one of the threads in this pool is listening on the HTTP port (8080 by default); the others are waiting on a Java monitor lock, which is implemented with the futex() system call on Linux (details at this level differ on each operating system). In the following thread dump output, you can see one thread currently blocked in a ServerSocket.accept() call, followed by a representative of the rest of the thread pool.
"http-0.0.0.0-8080-Processor25" daemon prio=1 tid=0xa52c21c0 nid=0x1525 runnable [0xa3ee6000..0xa3ee6e30]
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    - locked <0xac073298> (a java.net.SocksSocketImpl)

"http-0.0.0.0-8080-Processor24" daemon prio=1 tid=0xa52c9428 nid=0x1524 in Object.wait() [0xa3f67000..0xa3f680b0]
    at java.lang.Object.wait(Native Method)
    - waiting on <0xae28b670> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
    - locked <0xae28b670> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
The other thread pool members are in the Object.wait() state, but notice that they are not waiting to enter the PlainSocketImpl.socketAccept() method like the first thread; they are waiting inside Tomcat's own thread pool code. This is an example of the Leader-Follower pattern implemented in the Tomcat handler thread pool. If this synchronization were removed, the incoming-request processing (calling accept()) would in theory be roughly equivalent; the serialization would simply happen further up the stack, inside the JVM or the kernel. There could easily be other internal Tomcat details that require this synchronization. The important point is that the Leader-Follower pattern is implemented in the application code, and the program does not rely on the behavior we have observed from the operating system.
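Tomcat's actual ThreadPool code is considerably more involved, but the essence of the pattern visible in the dump above can be sketched in a few lines. The class and method names below are hypothetical illustrations, not Tomcat's:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the Leader-Follower pattern: one "leader" thread sits in
// accept(); the followers block on an application-level monitor, just as
// the Processor threads in the dump block inside Tomcat's pool code.
public class LeaderFollowerSketch {
    private final ServerSocket serverSocket;
    private final Object acceptLock = new Object(); // followers queue up here
    private final AtomicInteger handled = new AtomicInteger();

    LeaderFollowerSketch(ServerSocket ss) { this.serverSocket = ss; }

    void startProcessors(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                while (true) {
                    Socket s;
                    try {
                        // Only the current leader calls accept(); the rest
                        // wait on the monitor.
                        synchronized (acceptLock) {
                            s = serverSocket.accept();
                        }
                    } catch (IOException e) {
                        return; // server socket closed: shut the processor down
                    }
                    // The former leader processes the connection outside the
                    // lock; a follower immediately becomes the new leader.
                    handle(s);
                }
            }, "Processor-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    void handle(Socket s) {
        try { s.close(); } catch (IOException ignored) {}
        handled.incrementAndGet();
    }

    int handledCount() { return handled.get(); }

    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0); // ephemeral port
        LeaderFollowerSketch pool = new LeaderFollowerSketch(ss);
        pool.startProcessors(5);
        for (int i = 0; i < 3; i++) {
            new Socket("127.0.0.1", ss.getLocalPort()).close();
        }
        Thread.sleep(500);
        System.out.println("handled=" + pool.handledCount());
        ss.close();
    }
}
```

A thread dump of this sketch would show one thread runnable in socketAccept() and the rest blocked on acceptLock, mirroring the Processor25/Processor24 pair above.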
Tomcat's use of a single large pool of handler threads to handle incoming requests is probably the most straightforward way of implementing a Web Container. The JRun J2EE container also implements its Web Container in this fashion.
In contrast, Oracle Weblogic (formerly BEA Weblogic) v8.1 implements a small pool of "reader threads" that accept incoming connections. These threads place each request on an execution queue that feeds a larger pool of handler threads.
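That two-stage design can be sketched as a small accept pool feeding a BlockingQueue that a larger handler pool drains. This is an illustrative sketch of the architecture only; the class and method names are made up and do not come from Weblogic:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the Weblogic-style design: a few reader threads do nothing but
// accept connections and enqueue them; many handler threads do the work.
public class ReaderQueueSketch {
    private final ServerSocket serverSocket;
    private final BlockingQueue<Socket> executeQueue = new LinkedBlockingQueue<>();
    private final AtomicInteger handled = new AtomicInteger();

    ReaderQueueSketch(ServerSocket ss) { this.serverSocket = ss; }

    // The small pool: accept and enqueue, nothing else.
    void startReaders(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                while (true) {
                    try {
                        executeQueue.put(serverSocket.accept());
                    } catch (IOException | InterruptedException e) {
                        return; // server socket closed or thread interrupted
                    }
                }
            }, "Reader-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    // The larger pool: drain the queue and "process" each request.
    void startHandlers(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                while (true) {
                    try (Socket s = executeQueue.take()) {
                        handled.incrementAndGet(); // real work would go here
                    } catch (InterruptedException | IOException e) {
                        return;
                    }
                }
            }, "Handler-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    int handledCount() { return handled.get(); }

    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0);
        ReaderQueueSketch server = new ReaderQueueSketch(ss);
        server.startReaders(2);   // small reader pool
        server.startHandlers(8);  // larger handler pool
        for (int i = 0; i < 4; i++) {
            new Socket("127.0.0.1", ss.getLocalPort()).close();
        }
        Thread.sleep(500);
        System.out.println("handled=" + server.handledCount());
        ss.close();
    }
}
```

Decoupling accept from processing this way lets slow request handling back up in the queue without starving the accept loop, at the cost of an extra hand-off per connection.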
I find the limitations imposed on Java code by the java.net.PlainSocketImpl class interesting. The thinking behind them, as I understand it, is to ensure that the Java networking API behaves the same way across all implementation platforms. It is possible to let the operating system kernel deal with the concurrency issues of multiple threads calling the accept() system call on the same socket, but this is not a recommended best practice. The best practice is to implement synchronization in application code, above the kernel's networking implementation and the JVM's networking code.
The same best practice holds true for C/C++ server-side networking code. What has been presented here can work on some platforms, but that might not be the case on all of them.
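In Java, at least, the "it can work without an application-level lock" case is observable directly: several threads can call accept() on the same ServerSocket with no lock of their own, and each connection is still delivered to exactly one thread, because the serialization happens inside the JVM's socket implementation (the "locked <...> (a java.net.SocksSocketImpl)" line in the dump above). A minimal demonstration:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicInteger;

// Several threads call accept() on the same ServerSocket with no
// application-level synchronization; the JVM serializes the calls, so each
// incoming connection is accepted exactly once.
public class ConcurrentAcceptDemo {
    static final AtomicInteger accepted = new AtomicInteger();

    static void startAcceptors(ServerSocket ss, int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                while (true) {
                    try {
                        ss.accept().close(); // no lock of our own around accept()
                        accepted.incrementAndGet();
                    } catch (IOException e) {
                        return; // server socket closed
                    }
                }
            }, "Acceptor-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0);
        startAcceptors(ss, 4);
        for (int i = 0; i < 5; i++) {
            new Socket("127.0.0.1", ss.getLocalPort()).close();
        }
        Thread.sleep(500);
        // Each of the five connections is counted exactly once.
        System.out.println("accepted=" + accepted.get());
        ss.close();
    }
}
```

The Java API shields you here, but the equivalent unsynchronized accept() loop in C/C++ leans on platform-specific kernel behavior, which is exactly why the article recommends synchronizing in application code instead.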