Re: [squid-users] low squid throughput/scalability

From: Kevin <kkadow@dont-contact.us>
Date: Sat, 11 Mar 2006 03:17:05 -0600

On 3/10/06, Mike Leong <leongmzlist@gmail.com> wrote:
> Hi,
>
> I'm using squid as an accelerator/reverse proxy, serving lots and lots
> of small files (~20K each)
>
> I used siege ( http://www.joedog.org/siege/ ) to benchmark squid and
> got some pretty disappointing results: only 2.5mb/sec throughput.

Could you explain the design decisions behind the unusual RAID0 configuration?

My first thought is that the siege application is doing something odd.
It appears the "-c" option doesn't actually set the number of
concurrent connections, judging by the low concurrency value in the
results. Also, the 3 second "longest transaction" time may indicate a
problem with the network or TCP/IP layer.
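
One quick sanity check (a sketch; the port is an assumption, 80 for an
accelerator) is to count established connections to the proxy while the
benchmark runs, to see whether siege is really holding the requested
concurrency:

    netstat -tn | grep ':80 ' | grep -c ESTABLISHED

If the count stays far below the -c value, the load generator, not the
server, is the bottleneck.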

One good test would be to configure an Apache or thttpd server on the
Squid machine to serve up one or more 20KB static files, and run the
same benchmark against that server directly. If the plain HTTP server
performs no better than Squid, the performance issue is not Squid's.
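
For example, a minimal sketch (the file path, port, and hostname are
assumptions):

    # create a 20KB test file
    dd if=/dev/zero of=/var/www/test20k.bin bs=1024 count=20
    # serve the directory on an alternate port
    thttpd -p 8080 -d /var/www
    # run the same benchmark against it from the test machine
    siege -c 100 -t 60S http://squidhost:8080/test20k.bin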

Another possibility would be to try the same test using a different
tool, for example:

    http_load -parallel 100 -seconds 300 urls.txt
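
http_load expects urls.txt to contain one URL per line, e.g. (hostname
and filename are assumptions):

    http://squidhost/test20k.bin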

> The squid server and test machine are connected via a 100mbps link.
> During the test, the server cpu load is near zero and IO Wait is less
> than 10%. All the requests were TCP_OFFLINE_HITs, and according to
> top, swap is not used. From the benchmarks, it seems max throughput
> is 2.5mb/sec, kinda low for such a powerful server.
>
> Any comments/ideas?

When I see TCP throughput on a FastEthernet link top out at 25
megabits, the first thing I suspect is a duplex mismatch :)
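
An easy way to check (a sketch; the interface name is an assumption)
is to look at the negotiated link settings on both ends:

    ethtool eth0      # check the Speed:, Duplex:, and Auto-negotiation: lines
    mii-tool -v eth0  # older alternative; shows the link partner's advertised modes

A full-duplex port talking to a half-duplex port will pass pings just
fine but collapse under sustained TCP load.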

Kevin
Received on Sat Mar 11 2006 - 02:17:08 MST
