Re: [squid-users] Questions regarding COSS setup

From: Markus Meyer <markus.meyer_at_koeln.de>
Date: Tue, 26 Jan 2010 15:29:52 +0100

Hi all,

Sorry for the TOFU. I accidentally replied to Amos directly. Below are
the two mails.

I set up a test server with COSS and AUFS. But I couldn't set the size
for the COSS files to 16 GB. After some tinkering with the file size I
found out that with a "block-size" of 1 KB you can have a maximum size
of 16373 MB. If you add even one more MB, Squid will only tell you this:
FATAL: COSS cache_dir size exceeds largest offset
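
For reference, a COSS cache_dir line at that ceiling would look
something like this (the stripe path is made up for illustration, and
max-size=24576 is the 24 kB cut-off discussed in the quoted mails
below):

  cache_dir coss /web/cache/1/coss 16373 max-size=24576 block-size=1024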

Now I'll do some testing and get back to you when I have results.

Thanks again for the help,

        Markus

> Markus Meyer wrote:
>> Amos Jeffries schrieb:
>>
>> Hi Amos,
>>
>>>> I want to let Squid do as little IO as possible. So I thought I'd
>>>> set "maximum_object_size_in_memory" to 4 kB and "max-size" for
>>>> COSS to 3x 8 kB = 24 kB. The rest goes into AUFS.
>>> Looking at that, 8KB will catch 20% more than 4KB would.
>>
>> Right, but our RAM is limited and this is some kind of trade-off. I
>> want to get as many small files into RAM as possible. Reading tons
>> of 8 kB files from disk is not as bad as reading tons of 4 kB files
>> from disk.
>>
>>>> cache_dir aufs /web/cache/1/aufs/ 81920 290 256
>>>> cache_dir aufs /web/cache/2/aufs/ 81920 290 256
>>> Add min-size=24576 to the AUFS dirs to prevent them grabbing the
>>> small files intended for COSS.
>>
>> Ahh, good point. Thx
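
Putting the whole size-tiering together, I guess the relevant
squid.conf bits would look something like this (the COSS stripe path
and size are illustrative, carried over from my test above):

  maximum_object_size_in_memory 4 KB
  cache_dir coss /web/cache/1/coss 16373 max-size=24576 block-size=1024
  cache_dir aufs /web/cache/1/aufs/ 81920 290 256 min-size=24576
  cache_dir aufs /web/cache/2/aufs/ 81920 290 256 min-size=24576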
>>
>>>> - "--with-coss-membuf-size" compile-time option is set to 1 MB
>>>> per default. Does it make sense to change this value?
>>> AFAIK, no, but you may want to test that.
>>
>> Ok, so testing this is at the far end of my list ;)
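
(If anyone does want to play with it: as I understand it the value is
given in bytes at build time, so restating the default would be

  ./configure --with-coss-membuf-size=1048576

and larger values mean bigger slices, with the memory and load-time
trade-offs Amos describes below.)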
>>
>>>> - How big should I make the COSS files? I thought about 20 GB
>>>> on four disks for COSS and 60 GB on the same disks for AUFS.
>> [...]
>>> More total size means more slices being swapped in and out to
>>> load rarer things. Larger slice size reduces that, but increases
>>> loading time.
>>
>> Slices are what "--with-coss-membuf-size" defines? I also thought
>> that with COSS the memory overhead rises again because of the
>> memory buffers and asynchronous writing.
>
> Yes. Larger slice size means more memory for COSS. A certain number
> of these are held in memory at once to minimize disk delays further
> (10 by default). The larger they are, the longer it takes to read one
> in from disk when a file is needed on it, and the more memory the
> in-use set requires, etc.
>
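
(Side note from me: if I read the 2.x docs right, that number of
in-memory slices can itself be tuned with a membufs=N option on the
COSS cache_dir line, e.g.

  cache_dir coss /web/cache/1/coss 16373 block-size=1024 membufs=10

where membufs=10 just restates the default. Double-check before
relying on it.)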
>>
>>>> - How should I understand "block-size"? What values should I use?
>>>> I can't get my head around the docs in the Squid-Wiki.
>>> A block is equivalent to an inode, as I understand it. Each file
>>> is stored in 1-N blocks. A 512-byte block storing a 12-byte file
>>> will waste 500 bytes, as will two blocks storing a 524-byte
>>> object.
>>>
>>> To reach your 20GB directory size you will need block-size=2048.
>>>
>>> Going by your object distribution I'd say that's probably
>>> workable, though 1KB (dir size 16GB) would have less wastage.
>>
>> Then I'll start my tests with 16 GB and "block-size = 1 kB". Hang
>> on, you lost me here. How did you do the math?
>
> I cheated: the list on the wiki page under block-size says n=1024
> gets you a 16384 MB cache_dir size.
>
> The math AFAIK goes like this: 2^24 (absolute maximum count of
> objects in a single cache_dir) times 1 KB (block size, AKA minimum
> file size) = 16 GB (size of COSS cache_dir).
>
> It seems to me that should theoretically be the _minimum_. But I'm
> still fuzzy on what the store code does.
>
> Amos
> --
> Please be using
>   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
>   Current Beta Squid 3.1.0.15