Re: [squid-users] Cache_dir more than 10GB

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Mon, 13 Oct 2008 13:58:05 +1300 (NZDT)

> Hi,
>
> I reviewed Squid's filemap code, and it's clear that in some cases
> a large cache will have a high CPU load.
> In filemap.c, the file_map_create function starts with 2^13 elements
> and grows the bitmap only once it is completely full.
> So, for example, if the number of cached objects stays slightly below
> 2^23 and the bitmap size is 2^23, the map remains nearly full, and
> finding the next free bit takes a lot of CPU.
>
> Itzcak
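
(For context, the two loops under discussion look roughly like this; a
simplified sketch, not the actual filemap.c source, with illustrative
function and variable names:

    #define ALL_ONES 0xFFFFFFFFu

    /* Sketch of a word-then-bit scan for a free slot in a bitmap. */
    static int
    file_map_allocate_sketch(const unsigned int *map, int nwords)
    {
        int word, bit;
        /* word-detection loop: skip words with no free bit */
        for (word = 0; word < nwords; word++)
            if (map[word] != ALL_ONES)
                break;
        if (word == nwords)
            return -1;                  /* bitmap is full */
        /* bit-detection loop: linear scan inside the found word */
        for (bit = 0; bit < 32; bit++)
            if (!(map[word] & (1u << bit)))
                return word * 32 + bit;
        return -1;                      /* not reached: word has a free bit */
    }

When the map is nearly full, the word-detection loop walks a long run
of all-ones words, and the bit-detection loop then tests up to 31 set
bits, on almost every allocation.)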

Just an idea...
But it looks possible to add a binary chop between the word-detection
loop and the bit-detection loop, to chop the word down and seed the
'bit' variable. That's an immediate gain of n/2 tests.
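
A minimal sketch of that chop, in C; the function name, the 32-bit
word assumption, and the precondition are mine, not filemap.c's:

    /* Binary chop within one bitmap word.
     * Precondition: the word-detection loop already picked this word,
     * so word != 0xFFFFFFFF and at least one bit is zero. */
    static int
    find_free_bit(unsigned int word)
    {
        int bit = 0;
        int width = 32;
        while (width > 1) {
            width /= 2;
            unsigned int lower = (1u << width) - 1u;
            /* If the lower half of the current window is all ones,
             * a free bit must sit in the upper half. */
            if (((word >> bit) & lower) == lower)
                bit += width;
        }
        return bit;     /* index of the lowest zero bit */
    }

That resolves the bit in log2(32) = 5 comparisons instead of a linear
scan of up to 31; stopping the chop early and letting the existing
bit-detection loop finish from the seeded 'bit' would work too.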

/2c
Amos

>
> On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
> <henrik_at_henriknordstrom.net> wrote:
>> On Sun, 2008-10-05 at 16:38 +0200, Itzcak Pechtalt wrote:
>>> When Squid reaches several million objects per cache_dir, it starts
>>> to consume a lot of CPU, because every object insertion and deletion
>>> takes a long time.
>>
>> Mine don't.
>>
>>> On my Squid, an 80-100GB cache showed this CPU consumption effect.
>>
>> That's a fairly small cache.
>>
>> The biggest cache I have run was in the 1.5TB range, split over a
>> number of cache_dir entries of about 130GB each, I think.
>>
>> But it is important you keep the number of objects per cache_dir well
>> below 2^24. Preferably not more than 2^23.
>>
>>
>> What I think is that you got bitten by something other than cache size.
>>
>> Regards
>> Henrik
>>
>
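
For reference, Henrik's sizing works out like this: 2^23 is about 8.4
million objects, so at an assumed mean object size of, say, 13KB, that
is roughly 110GB per cache_dir, in line with the ~130GB figure above.
A hypothetical squid.conf split of a ~1.5TB cache, with illustrative
paths, sizes, and store type, might look like:

    # Twelve cache_dir entries of 130GB (130000 MB) each, ~1.5TB in
    # total, keeping each directory well under 2^23 objects.
    cache_dir aufs /cache1 130000 64 256
    cache_dir aufs /cache2 130000 64 256
    # ... /cache3 through /cache11 ...
    cache_dir aufs /cache12 130000 64 256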