Re: [squid-users] Recommended Store Size

From: Nyamul Hassan <mnhassan_at_usa.net>
Date: Wed, 26 Nov 2008 22:22:26 +0600

Thank you, Chris, for your valuable info. Sorry about asking the "should I
increase store size" question. It was a bit on the "duh" side. :)

Is there a measurement inside the Squid counters that tells me the number of
bytes of data transferred? You mentioned it was around 150GB in a day for your
setup; I was wondering where to see this in Squid.

Regards
HASSAN

----- Original Message -----
From: "Chris Robertson" <crobertson_at_gci.net>
To: "Squid Users" <squid-users_at_squid-cache.org>
Sent: Wednesday, November 26, 2008 03:13
Subject: Re: [squid-users] Recommended Store Size

> Nyamul Hassan wrote:
>> Thanks, Chris. The cost of hardware is not a big factor here, as it is
>> directly related to the amount of BW that we save, and also to the customer
>> experience of getting pages faster from the cache.
>>
>> After looking through many of the threads here, I've found that some people
>> are using cache stores measured in terabytes. I was wondering whether a
>> bigger store would improve the "byte hit ratio", which seems to indicate
>> how much BW was saved.
>
> It won't reduce it. :o) If you want to increase the byte hit ratio,
> change your cache_replacement_policy to "heap LFUDA" and increase your
> maximum_object_size. Be sure your Squid is compiled with the
> "--enable-removal-policies=" option specifying heap as one of the choices.
> My compilation options are below...
>
> -bash-3.2$ /usr/local/squid/sbin/squid -v
> Squid Cache: Version 2.7.STABLE5
> configure options: '--enable-stacktraces' '--enable-snmp'
> '--enable-removal-policies=heap,lru' '--enable-storeio=aufs,null,ufs'
> '--with-pthreads' '--enable-err-languages=English'
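>
> As a minimal sketch, those two changes in squid.conf would look something
> like this (the 1 GB cap matches the maximum object size I mention below;
> pick a value that suits your store):
>
>   cache_replacement_policy heap LFUDA
>   maximum_object_size 1024 MB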
>
>>
>> If I wanted to increase my store size by adding a JBOD of 12 disks over
>> eSATA, using 12 x 160 GB SATA disks and putting 130GB of cache on each
>> disk, for a total cache store of roughly 1.5 TB, would that improve the
>> hit ratio?
>
> Be aware. For each TB of disk space, you might need up to 10GB of RAM to
> track the objects. I'm pretty sure that calculation is based on a 20KB
> (or so) mean object size, but it's something to keep in mind. Don't go
> all-out on increasing the storage size without keeping an eye on the
> associated memory usage.
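>
> The rough arithmetic behind that rule of thumb, assuming a 20KB mean
> object size and something like 200 bytes of in-memory index overhead per
> object (that per-object figure is an estimate and varies by build):
>
>   1 TB / 20 KB per object  = ~50 million objects
>   50 million x ~200 bytes  = ~10 GB of RAM for the index alone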
>
>>
>> I understand that patterns of user behavior greatly change the "hit
>> ratio"; we ourselves see it drop during off-peak hours (late into the
>> night), as the users who are online probably visit more diverse web
>> content. I just wanted to check how the people here who are using Squid
>> as a "forward proxy" are doing in terms of saving BW, and, for "regular
>> broadband internet users", how much BW they are saving with how big a
>> cache store.
>
> As for myself, I currently have one main cache with 8GB of RAM and ~720GB
> of store dir (180GB on each of 4 Seagate ST3500320AS spindles, 630GB used).
> The Squid process size is 3.8GB (according to top). I "suffer" a pretty
> serious 50% wait state and load spikes above 8, but my hit response time
> is sub-30ms under peak load (misses hover around 120ms). On a typical
> weekday my cache passes around 150GB of traffic to clients, and about 10GB
> on each day of the weekend, for a total of just under 800GB of traffic per
> week. Using heap LFUDA for my cache_replacement_policy with a maximum
> object size of 1GB, I saw a 25% request hit rate and a 24% byte hit rate on
> yesterday's traffic. Friday's traffic was at 25% and 21% respectively,
> and seems more typical.
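>
> For reference only, a store layout like mine could be expressed with
> cache_dir lines roughly like these (the /cacheN mount points are made up;
> 184320 MB = 180 GB, and 16/256 are the default L1/L2 directory counts):
>
>   cache_dir aufs /cache1 184320 16 256
>   cache_dir aufs /cache2 184320 16 256
>   cache_dir aufs /cache3 184320 16 256
>   cache_dir aufs /cache4 184320 16 256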
>
>>
>> Thanks once again for your response; I hope you and the others running
>> Squid the way I am will share some of your experiences.
>>
>> Regards
>> HASSAN
>
> Chris
>
>
Received on Wed Nov 26 2008 - 16:22:53 MST
