Odi's astoundingly incomplete notes
Tuning mod_proxy and Jetty
Running a classic setup with Apache as a reverse proxy in front of a Jetty server, I ran into a problem. Apparently most of Jetty's HTTP threads were blocked for long periods of time in the following stack within the AJP connector, and no threads were available for the HTTP connector:
java.lang.Thread.State: RUNNABLE
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at org.eclipse.jetty.io.ByteArrayBuffer.readFrom(ByteArrayBuffer.java:388)
    at org.eclipse.jetty.io.bio.StreamEndPoint.fill(StreamEndPoint.java:132)
    at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.fill(SocketConnector.java:209)
    at org.eclipse.jetty.ajp.Ajp13Parser.fill(Ajp13Parser.java:203)
    at org.eclipse.jetty.ajp.Ajp13Parser.parseNext(Ajp13Parser.java:265)
    at org.eclipse.jetty.ajp.Ajp13Parser.parseAvailable(Ajp13Parser.java:150)
    at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:411)
    at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:241)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:529)
    at java.lang.Thread.run(Thread.java:662)

The setup is (a minimal config sketch follows the list):
- have many Apache threads
- Apache serves some static content and downloads
- Jetty serves the dynamic content
- Jetty has a limited thread pool to accommodate the maximum allowable load
- the clients are all application clients (typically not web browsers)
- the clients are mostly mobile with more or less shaky over-the-air (OTA) connectivity (lots of timeouts, broken connections)
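In httpd terms, that setup boils down to something like the sketch below. The /app context and filesystem paths are made up for illustration; the real question is whether the ProxyPass line speaks HTTP or AJP, which is exactly what the tests below compare.

# Hypothetical illustration of the split; the /app context and paths are examples only.
# Static content and downloads are served directly by Apache:
DocumentRoot /var/www/static
# Dynamic content is handed off to Jetty, either over HTTP...
ProxyPass        /app http://localhost:8080/app
ProxyPassReverse /app http://localhost:8080/app
# ...or over AJP (the variant the rest of this post measures against HTTP):
#ProxyPass       /app ajp://localhost:8009/app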
Test Setup
To find the configuration problem I ran some tests on a local machine:
- 100 clients making a simple dynamic request in parallel
- measure the time to complete all requests
Jetty 7.4.3 config:
- QueuedThreadPool.maxThreads = 30
- SelectChannelConnector.maxIdleTime = 300000
- Ajp13SocketConnector.maxIdleTime = 60000
Apache 2.2.23 config:
- mod_proxy_http
- mod_proxy_ajp
- different values for MaxClients (see the sketch below)
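On the Apache side the varied knobs map to the directives sketched here. This is only an assumed fragment (the default prefork MPM of Apache 2.2 is taken for granted); the concrete MaxClients and keep-alive values for each run are in the tables below, and the ProxyPass pool parameters stay fixed throughout.

# Sketch of the Apache test configuration (assuming the default prefork MPM).
# MaxClients was varied between 30 and 150 per run:
MaxClients 150
# Keep-alive towards the clients was toggled per run:
KeepAlive On
# The ProxyPass pool parameters used below stayed the same in every run:
#   min=1          minimum number of backend connection pool entries
#   ttl=20         idle pooled connections are not reused after 20 seconds
#   acquire=10000  wait at most 10000 ms for a free pool connection
#   timeout=60     60 second timeout towards the backend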
mod_proxy_ajp
ProxyPass / ajp://localhost:8009/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients | keep-alive | Time [s] | timeouts? |
---|---|---|---|
150 | no | 60.1 | yes |
30 | no | 11.4 | no |
150 | yes | 74.7 | yes |
30 | yes | 70.3 | yes |
mod_proxy_http
ProxyPass / http://localhost:8080/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients | keep-alive | Time [s] | timeouts? |
---|---|---|---|
150 | no | 2.0 | no |
30 | no | 6.0 | no |
150 | yes | 19.8 | no |
30 | yes | 66.0 | no |
JBoss
For comparison I ran the same test against an old JBoss 4.0.4 which uses Tomcat 5.5.
mod_proxy_ajp
ProxyPass / ajp://localhost:8009/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients | keep-alive | Time [s] | timeouts? |
---|---|---|---|
150 | no | 14.5 | no |
30 | no | 9.4 | no |
150 | yes | 33.8 | no |
30 | yes | 68.5 | no |
mod_proxy_http
ProxyPass / http://localhost:8080/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients | keep-alive | Time [s] | timeouts? |
---|---|---|---|
150 | no | 2.0 | no |
30 | no | 6.0 | no |
150 | yes | 21.6 | no |
30 | yes | 66.0 | no |
Conclusions
There is no benefit in using AJP instead of HTTP; the opposite is true. AJP really hurts. I have observed client timeouts only with AJP, never with HTTP. HTTP was also always faster than AJP. The AJP issues are mostly due to a bad implementation in Jetty, as (JBoss') Tomcat seems to behave a bit better. But mod_proxy_ajp is probably to blame too: it seems to do something stupid when multiplexing the Apache threads onto the AJP backend pool.
This test reveals striking differences between AJP and HTTP and how they behave with connection pools:
- mod_proxy_ajp only works well if the Apache and Jetty connection pools have the same size; the larger the difference, the worse the performance (see the sketch after this list).
- mod_proxy_http works better the larger the Apache connection pool is.
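The first point matches the numbers: AJP only behaved acceptably once MaxClients equalled Jetty's 30 worker threads. As a sketch, and assuming the prefork MPM where each Apache process holds at most one backend connection (so MaxClients is effectively the total pool size), matching the pools looks like this:

# If you must stay on AJP, keep the Apache side no larger than Jetty's thread pool.
# QueuedThreadPool.maxThreads = 30 on the Jetty side, so:
MaxClients 30
ProxyPass / ajp://localhost:8009/ min=1 ttl=20 acquire=10000 timeout=60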
You cannot mix HTTP and AJP connectors on the same Apache instance, because there is no configuration that works well for both. Given that the AJP numbers are so much worse than HTTP, and that the behaviour with pools is so counter-intuitive, I recommend not using mod_proxy_ajp at all.
It is advisable to disable keep-alive in scenarios with OTA connections. You can use mod_headers:
RequestHeader set Connection "close"
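Two hedged variants of the same idea: the core KeepAlive directive switches persistent client connections off globally, or the mod_headers line can be scoped to just the dynamic part so that static files keep their keep-alive (the /app location is hypothetical, matching the sketch near the top of this post).

# Variant 1: turn client keep-alive off globally
KeepAlive Off
# Variant 2: only drop keep-alive for requests that hit Jetty
<Location /app>
    RequestHeader set Connection "close"
</Location>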