Odi's astoundingly incomplete notes


Tuning mod_proxy and Jetty

Running a classic setup with Apache as a reverse proxy in front of a Jetty server, I ran into a problem. Apparently most HTTP threads of Jetty were blocked for long periods of time in the following stack within the AJP connector, so no threads were left for the HTTP connector:
   java.lang.Thread.State: RUNNABLE
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:129)
	at org.eclipse.jetty.io.ByteArrayBuffer.readFrom(ByteArrayBuffer.java:388)
	at org.eclipse.jetty.io.bio.StreamEndPoint.fill(StreamEndPoint.java:132)
	at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.fill(SocketConnector.java:209)
	at org.eclipse.jetty.ajp.Ajp13Parser.fill(Ajp13Parser.java:203)
	at org.eclipse.jetty.ajp.Ajp13Parser.parseNext(Ajp13Parser.java:265)
	at org.eclipse.jetty.ajp.Ajp13Parser.parseAvailable(Ajp13Parser.java:150)
	at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:411)
	at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:241)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:529)
	at java.lang.Thread.run(Thread.java:662)
The setup is the classic one described above: clients on unreliable OTA connections, Apache as the reverse proxy, and Jetty behind it exposing both an HTTP and an AJP connector. I was a bit surprised that the AJP calls could so easily starve the HTTP calls. Given that HTTP keep-alive and unreliable OTA connections don't mix well, this makes an interesting test.
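To illustrate why this starvation can happen: in Jetty 7, when no per-connector thread pool is configured, all connectors take their worker threads from the single server-wide QueuedThreadPool. A minimal jetty.xml sketch of such a setup (ports and thread counts are illustrative, not the exact configuration used in the tests):

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">

  <!-- one thread pool serves all connectors -->
  <Set name="ThreadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Set name="maxThreads">50</Set>
    </New>
  </Set>

  <!-- blocking HTTP connector -->
  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.bio.SocketConnector">
        <Set name="port">8080</Set>
        <Set name="maxIdleTime">30000</Set>
      </New>
    </Arg>
  </Call>

  <!-- blocking AJP connector for mod_proxy_ajp -->
  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.ajp.Ajp13SocketConnector">
        <Set name="port">8009</Set>
      </New>
    </Arg>
  </Call>
</Configure>

Once enough threads are stuck reading from slow AJP connections, nothing is left to serve the HTTP connector.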

Test Setup
To find the configuration problem I ran some tests on a local machine. The clients are set to misbehave: after the request completes they wait 60 seconds before closing the socket. This emulates broken network connections that never terminate correctly. The clients can be configured to either use HTTP 1.1 keep-alive or not.
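For illustration, a stripped-down sketch of such a misbehaving client (this is not the actual test harness; host, port and path are placeholders):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Sketch of a misbehaving test client: it sends one request, reads the
// start of the response and then holds the socket open for 60 seconds
// instead of closing it promptly.
public class SlowClosingClient {
    public static void main(String[] args) throws Exception {
        boolean keepAlive = args.length > 0 && "keep-alive".equals(args[0]);
        Socket socket = new Socket("localhost", 80); // Apache in front of Jetty
        try {
            OutputStream out = socket.getOutputStream();
            String request = "GET / HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "Connection: " + (keepAlive ? "keep-alive" : "close") + "\r\n"
                    + "\r\n";
            out.write(request.getBytes("US-ASCII"));
            out.flush();

            // read (the beginning of) the response; a well-behaved client
            // would parse it and close or reuse the connection right away
            InputStream in = socket.getInputStream();
            in.read(new byte[8192]);

            // misbehave: emulate a dead OTA connection by never terminating
            // the exchange cleanly for a full minute
            Thread.sleep(60000);
        } finally {
            socket.close();
        }
    }
}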

The backend is Jetty 7.4.3. Several Apache configurations were tested:

mod_proxy_ajp
ProxyPass / ajp://localhost:8009/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients   keep-alive   Time [s]   timeouts?
150          no           60.1       yes
30           no           11.4       no
150          yes          74.7       yes
30           yes          70.3       yes
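The MaxClients column refers to Apache's MPM limit on concurrent workers, which also bounds how many connections Apache can hold towards clients and the backend. A sketch of the corresponding settings, assuming the prefork MPM (only MaxClients was varied between runs, 150 vs 30; the other values are illustrative):

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    ServerLimit         150
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>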


mod_proxy_http
ProxyPass / http://localhost:8080/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients   keep-alive   Time [s]   timeouts?
150          no            2.0       no
30           no            6.0       no
150          yes          19.8       no
30           yes          66.0       no

JBoss
For comparison I ran the same test against an old JBoss 4.0.4, which uses Tomcat 5.5.

mod_proxy_ajp
ProxyPass / ajp://localhost:8009/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients   keep-alive   Time [s]   timeouts?
150          no           14.5       no
30           no            9.4       no
150          yes          33.8       no
30           yes          68.5       no


mod_proxy_http
ProxyPass / http://localhost:8080/ min=1 ttl=20 acquire=10000 timeout=60
MaxClients   keep-alive   Time [s]   timeouts?
150          no            2.0       no
30           no            6.0       no
150          yes          21.6       no
30           yes          66.0       no


Conclusions
There is no benefit to using AJP instead of HTTP; the opposite is true. AJP really hurts. I have observed client timeouts only with AJP, never with HTTP. HTTP was also always faster than AJP. The AJP issues are mostly due to a bad implementation in Jetty, as (JBoss's) Tomcat seems to behave a bit better. But mod_proxy_ajp is probably to blame too: it seems to do something stupid when multiplexing the Apache threads onto the AJP backend pool.

This test reveals striking differences between AJP and HTTP in how they behave with connection pools. This has horrible consequences:
You can not mix HTTP and AJP connectors behind the same Apache instance, because there is no pool configuration that works for both. Given that the AJP numbers are so much worse than HTTP, and the behaviour with pools is so counter-intuitive, I recommend not using mod_proxy_ajp at all.
It is advisable to disable keep-alive in scenarios with OTA connections. You can use mod_headers:
RequestHeader set Connection "close"
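
Putting both recommendations together, the relevant piece of Apache configuration could look roughly like this (a sketch only, assuming mod_proxy_http and mod_headers are loaded; port and pool options as in the test runs above):

<VirtualHost *:80>
    # proxy to Jetty's HTTP connector instead of AJP
    ProxyPass / http://localhost:8080/ min=1 ttl=20 acquire=10000 timeout=60

    # force keep-alive off for unreliable (OTA) clients
    RequestHeader set Connection "close"
</VirtualHost>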


posted on 2012-10-16 14:29 UTC in Code