Re: Performance degradation in recent Squid releases

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Fri, 21 Jan 2011 15:29:00 +1300

On 21/01/11 09:59, Phil Oester wrote:
> I recently upgraded a proxy farm from 2.7 to 3.1, and saw
> a huge performance drop. For validation purposes, I set up
> a test box with a local webserver and different versions of
> Squid to compare performance. The test involved transferring
> a single 500MB file (output to /dev/null). I also included
> tinyproxy and Apache for comparison purposes. Results are
> as follows (average of 5 runs):
>
> 3.2.0.4:          43 MB/s
> 3.1.10:           47 MB/s
> 3.0.STABLE25:     53 MB/s
> 2.7.STABLE9:     120 MB/s
> 2.6.STABLE23:    123 MB/s
> tinyproxy 1.6.5: 146 MB/s
> apache 2.2.17:   210 MB/s
> direct:          725 MB/s
>
> As one can see, the drop from 2.x to 3.x is huge, likely due
> to the C++ refactoring. And each successive 3.x version
> has continued the downward trend.

Single 500MB file with nothing else going on?
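
If you want to reproduce that kind of measurement outside your rig, a
bare-bones harness along these lines would do it. The proxy address,
port, and URL here are placeholders for whatever the test box actually
runs:

  // Fetch one large file through a proxy, discard the body, and report
  // MB/s. Endpoints below are placeholders; adjust to the test setup.
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <chrono>
  #include <cstdio>
  #include <cstring>

  int main() {
      const char *proxyIp = "127.0.0.1";   // placeholder proxy address
      const int proxyPort = 3128;          // placeholder proxy port
      const char *request =
          "GET http://127.0.0.1/500MB.bin HTTP/1.1\r\n"  // placeholder URL
          "Host: 127.0.0.1\r\n"
          "Connection: close\r\n\r\n";

      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) { perror("socket"); return 1; }

      sockaddr_in addr{};
      addr.sin_family = AF_INET;
      addr.sin_port = htons(proxyPort);
      inet_pton(AF_INET, proxyIp, &addr.sin_addr);
      if (connect(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) < 0) {
          perror("connect");
          return 1;
      }
      if (write(fd, request, strlen(request)) < 0) { perror("write"); return 1; }

      // Headers and body are counted together; for a 500MB body the
      // header bytes are noise in the MB/s figure.
      char buf[64 * 1024];
      long long total = 0;
      auto start = std::chrono::steady_clock::now();
      ssize_t n;
      while ((n = read(fd, buf, sizeof(buf))) > 0)
          total += n;
      auto end = std::chrono::steady_clock::now();
      close(fd);

      double secs = std::chrono::duration<double>(end - start).count();
      printf("%lld bytes in %.2fs = %.1f MB/s\n",
             total, secs, total / (1024.0 * 1024.0) / secs);
      return 0;
  }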

>
> So my question is whether any work is being done on improving
> performance of the 3.x series?

It is. And 3.1 is getting as many small improvements as I dare port back.
So far the ones going into 3.1 and 3.2 have centered on HTTP/1.1
compliance, raising the performance of persistent connections.

The stuff going into 3.2 has been all of that, plus code cleanups with
an eye toward reducing data copying and removing useless code.

NOTE: on a single transfer these optimizations would yield negligible
improvement. On a few hundred small requests it's showing me over 200%
speed-up since 3.0.

> The performance drop of >50%
> is quite noticeable for end users. Is there anything I can
> do to help?
>

This test and contribution have already been a great help. Thank you.

It has shown a surprising reduction in performance for 3.2. All the
other metrics supplied have shown a major drop between squid-2 and 3.0,
and a further drop into early 3.1 which has since been clawed back.

There is 30% extra CPU usage we want to get rid of, which may be
contributing to this MB/s loss. Off the top of my head, the problem you
see is probably due to more event overhead while tunneling the body.

Optimizing the body-transfer part of the transaction is all about
optimizing the size of the TCP chunks being pumped.

IIRC we have a problem with the reply read buffer holding itself at a
small fixed size (1KB or 4KB). Ideally, for large files it would
auto-size to allow bigger chunks of input with every event cycle. If you
want to look into whether that is still present, and why, it would be
helpful. The code is centered around ServerStateData::replyBodySpace in
Server.cc.
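
To make the buffer idea concrete, here is a rough illustration of the
auto-sizing behaviour I mean. This is not the actual Squid code; the
ReadBuffer class and its minSize/maxSize limits are invented for the
sketch:

  // Hypothetical auto-sizing read buffer: start small, and double the
  // capacity (up to a cap) whenever a read fills the buffer completely,
  // since a full read suggests the peer had more data queued.
  #include <unistd.h>
  #include <vector>

  class ReadBuffer {
  public:
      ReadBuffer() : buf_(minSize) {}

      // One read per event cycle. Growing after a full read lets large
      // downloads move bigger chunks each cycle instead of being pinned
      // at a small fixed size.
      ssize_t readFrom(int fd) {
          ssize_t n = read(fd, buf_.data(), buf_.size());
          if (n == static_cast<ssize_t>(buf_.size())) {
              size_t next = buf_.size() * 2;
              buf_.resize(next < maxSize ? next : maxSize);
          }
          return n;
      }

      size_t capacity() const { return buf_.size(); }

  private:
      static constexpr size_t minSize = 4 * 1024;    // the small fixed size
      static constexpr size_t maxSize = 256 * 1024;  // arbitrary cap
      std::vector<char> buf_;
  };

The point is only that a read which fills the buffer completely is a
hint that more data was queued, so the next event cycle should be
allowed a bigger read.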

Amos

-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.10
   Beta testers wanted for 3.2.0.4