[PATCH] Optimization: Use bigger buffer for server reads

From: Alex Rousskov <rousskov_at_measurement-factory.com>
Date: Thu, 31 Mar 2011 11:19:06 -0600

Hello,

    This patch changes the server read buffer limits to a 16KB minimum
and a 256KB maximum. They used to be 2KB and 2GB; before r9766, they
were 4KB and SQUID_TCP_SO_RCVBUF.
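
For illustration only, the new limits amount to something like the
following (the identifiers are made up for this sketch; they are not
Squid's actual names):

    #include <cstddef>

    // Illustrative constants only; not Squid's actual identifiers.
    static const size_t ReadBufMinSize = 16 * 1024;   // new minimum (was 2KB)
    static const size_t ReadBufMaxSize = 256 * 1024;  // new maximum (was 2GB)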

Trunk r9766 (Remove limit on HTTP headers read) made the default HTTP
server read buffer size 2KB instead of 4KB, visibly slowing down Squid
when kernel network buffers are full and could sustain larger Squid
reads. Doing up to twice as many network reads is expensive (and
probably not just because of the extra system call overhead).

We never grow that buffer unless the _parser_ needs more space: even if
the HTTP client is slower than the server, the buffer stays small
because Squid gives all the data to Store, and Store eventually stalls
reading via delayAwareRead() and read_ahead_gap. The situation may be
different with RESPMOD, but if the adaptation service is fast, the
buffer would still not grow.
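
A minimal sketch of that grow-only-on-parser-demand policy (the
function and parameter names are assumptions for illustration, not
Squid's actual code):

    #include <algorithm>
    #include <cstddef>

    // Returns the buffer size to use for the next server read. The
    // buffer stays at its current size while the parser can make
    // progress; it doubles (up to maxSize) only when the parser needs
    // more lookahead space.
    size_t nextReadBufSize(size_t current, size_t maxSize, bool parserNeedsMore)
    {
        if (!parserNeedsMore)
            return current; // fast consumers keep the buffer small
        return std::min(current * 2, maxSize);
    }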

This change does not reset the minimum buffer size to the old 4KB
default because memory is much cheaper now than when that default was
set. 8KB may have worked too, but with a typical median response size of
about 12KB, a larger buffer may be a good idea for a busy Squid. More
performance work is needed to find the optimal value (which could depend
on the environment).

This change does not retain the current 2GB maximum buffer size because
we have not tested how the header and chunking parsers would cope with
malicious messages trying to run Squid out of RAM, and also because no
good parser should need that much lookahead space. Is 256KB enough for
all legitimate real-world response headers? We do not know.

It is tempting to use Config.tcpRcvBufsz or SQUID_TCP_SO_RCVBUF to find
the right minimum or maximum buffer size, but those parameters deal with
low-level TCP buffering aspects while this buffer deals with HTTP
parsing.
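
For contrast, here is a minimal sketch of the layer at which
SQUID_TCP_SO_RCVBUF operates (the helper name is made up; setsockopt(2)
itself is the standard socket API call):

    #include <sys/socket.h>

    // Sizes the kernel's TCP receive queue for one socket; this does
    // not affect the user-space buffer that the HTTP parser scans.
    void setKernelRcvBuf(int fd, int bytes)
    {
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
    }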

This change has been tested in production environments.

I think it would be nice to get this optimization into the upcoming
v3.2.0.6 release, as it may help, at least in part, to address the
performance concerns that have been raised here.

Thank you,

Alex.
