[squid-users] Squid configuration for large objects

From: Michael Puckett <Michael.Puckett@dont-contact.us>
Date: Tue, 28 Sep 2004 10:05:21 -0700

I am having some difficulty with my squid implementation and have come
here to the squid experts for help.

My application uses squid to manage a small collection (10-12) of large
(around 2 GB) objects in the cache, which change infrequently, and to
deliver these objects to around 40 clients spread across 4 network
interfaces.

I have a dedicated dual-processor Solaris machine with 2 GB of RAM, a
dedicated 36 GB cache drive, and a quad-port 100 Mb NIC. The objective is
to run all 4 ports at switch speed, delivering about 11 MB/s to each
port.
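(For the arithmetic: 100 Mb/s per port is roughly 12.5 MB/s raw, so
about 11 MB/s per port after Ethernet/TCP overhead, or roughly 44 MB/s
aggregate across all 4 ports.)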

I upped cache_mem to 1500 MB to make full use of the RAM.
I set store_avg_object_size to 1 GB to reduce the per-object index
overhead.
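
For reference, the relevant squid.conf lines look roughly like this (the
cache_dir path stands in for my actual layout; maximum_object_size is
also raised, since the 4 MB default would keep these objects out of the
cache entirely):

    cache_mem 1500 MB                 # use most of the 2 GB of RAM for hot objects
    store_avg_object_size 1 GB        # only a handful of ~2 GB objects; shrink the index estimate
    maximum_object_size 2048 MB       # let the ~2 GB objects into the disk cache at all
    cache_dir ufs /cache 30000 16 256 # dedicated 36 GB cache drive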

I then use wget to fetch a 450 MB object.
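
Concretely, each client does a plain fetch through the proxy, something
like this (hostname and URL are stand-ins for my actual setup):

    # fetch via the proxy, discarding the body; one of these per client
    http_proxy=http://squidbox:3128/ wget -O /dev/null http://origin/obj-450MB.bin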

This configuration can only deliver an aggregate throughput of about 20
MB/s: it supports 1 port at full speed, 2 ports at about 10 MB/s each,
or 4 ports at about 5 MB/s each. This is less than half of the desired
throughput.

I then tried setting maximum_object_size_in_memory to 512 MB to get squid
to retain the object in memory. This worked, in that the log tag went
from TCP_HIT to TCP_MEM_HIT, but performance plummeted by about 20x,
which was unexpected.
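
That change was this single directive, with cache_mem still at 1500 MB
as above:

    # try to serve whole objects out of RAM rather than the disk cache
    maximum_object_size_in_memory 512 MB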

I have also rebuilt squid with threads and async I/O and switched to the
aufs cache type. This did improve performance somewhat, but still not to
the point of full switch speed. I know the hardware is capable of
delivering the necessary performance: repeating the same test against
Apache directly shows no degradation from 1 to 4 ports; it delivers full
switch speed to each port.
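
For the record, the rebuild and the matching cache_dir change were
roughly:

    # rebuild with pthreads/async I/O and the threaded aufs store module
    ./configure --enable-async-io --enable-storeio=aufs,ufs

    # squid.conf: switch the cache directory to the aufs type
    cache_dir aufs /cache 30000 16 256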

Can anyone recommend a large-object configuration that would work for
this application? Perhaps a different memory replacement policy too?
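
For concreteness, the knobs I have in mind are the replacement-policy
directives, along the lines of the following (the heap policies need a
build with --enable-removal-policies=heap; these particular choices are
just examples):

    memory_replacement_policy heap GDSF
    cache_replacement_policy heap LFUDA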

Best regards,

-mikep
