[squid-users] Re: Huge Cache / Memory Usage

From: Sunny <sunyucong_at_gmail.com>
Date: Wed, 15 Dec 2010 11:43:09 -0800

To clarify,

I am running a forward proxy cache, aimed at reducing egress traffic
on my link for ~100 users.

Cheers.

On Wed, Dec 15, 2010 at 11:17 AM, Sunny <sunyucong_at_gmail.com> wrote:
> Hi there,
>
> I am working on building a cache with Squid 3.1.9. I've got two
> machines, each with 4 GB of RAM and two 500 GB disks. I want to make
> the cache as large as possible, to get the most out of those two big
> disks.
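>
> For reference, this is roughly what squid.conf looks like on each box
> (the paths and exact sizes here are illustrative rather than copied
> verbatim from my config):
>
>   # illustrative values: one aufs cache_dir per spindle
>   # (size is in MB, then the L1/L2 directory counts)
>   cache_dir aufs /cache1 400000 64 256
>   cache_dir aufs /cache2 400000 64 256
>   # keep the in-memory object cache modest so the index has room
>   cache_mem 256 MB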
>
> However, I soon found out that I am severely limited by memory: heavy
> swapping starts once my cache exceeds 9 million objects. Also, every
> time I restart the cache it spends an hour just rescanning all the
> entries into memory, and that keeps taking longer. Meanwhile,
> according to iostat -x -d, utilization on my two disks is often below
> 5% during both the scan and normal serving, which seems like a waste.
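>
> (To reproduce the measurement: I'm watching the %util column of the
> extended per-device report, e.g.
>
>   iostat -x -d 5
>
> which reprints the device statistics every 5 seconds.)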
>
> In some documentation I found a statement that Squid needs about
> 14 MB of RAM (on 64-bit) for every 1 GB of disk cache. If that's the
> case, filling a 500 GB disk would take ~7 GB of RAM just to hold the
> metadata.
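>
> A quick back-of-the-envelope check of that rule (taking the 14 MB per
> GB figure at face value):
>
>   14 MB/GB x 500 GB = 7000 MB, i.e. ~7 GB of index per disk,
>
> and roughly double that if one box ever indexes both of its 500 GB
> disks, versus the 4 GB of RAM each machine actually has.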
>
> So my question is:
>
> 1. Is this statement true? Can Squid somehow do lookups directly on
> disk, to improve disk utilization and reduce memory needs?
> 2. How big a cache do people usually run? I expect a 500 GB cache
> would definitely improve the hit ratio and the byte hit ratio; is
> that true?
> 3. What other optimizations are needed for building a huge cache?
>
> Thanks in advance.
>
Received on Wed Dec 15 2010 - 19:43:33 MST