Re: [squid-users] Maximum disk cache size per worker

From: Sokvantha YOUK <sokvantha_at_gmail.com>
Date: Fri, 22 Mar 2013 09:20:21 +0700

Dear Amos,

Thank you for your advice. I got it working after shrinking the
cache_dir size from 170 GB to 70 GB. My total physical memory is 32 GB.
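For anyone following along, the rough index-RAM ratio Amos applies further down, (cache_dir size in MB / slot size) x number of dirs, read as GB of index RAM, shows why shrinking to 70 GB per rock cache_dir fits a 32 GB box. A quick sketch (the ratio is a back-of-the-envelope heuristic from this thread, not an exact Squid formula):

```python
# Rough rock-store index RAM estimate, using the same back-of-the-envelope
# ratio Amos applies below: (cache_dir MB / slot size) x number of dirs,
# read as GB of index RAM. A heuristic, not an exact Squid formula.
def index_ram_gb(cache_dir_mb, slot_size_bytes, num_dirs):
    return cache_dir_mb / slot_size_bytes * num_dirs

before = index_ram_gb(170000, 31000, 6)  # original config
after = index_ram_gb(70000, 31000, 6)    # shrunk config

print(round(before, 1))  # ~32.9 GB: does not fit in 32 GB of RAM
print(round(after, 1))   # ~13.5 GB: leaves headroom on a 32 GB box
```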

---
Regards,
Vantha
On Thu, Mar 21, 2013 at 8:07 PM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> On 21/03/2013 9:21 p.m., Sokvantha YOUK wrote:
>>
>> Dear All,
>>
>> I am working on distributing cached objects by size across different
>> Squid processes. My server is CentOS 6 x86-64 running Squid 3.3.3.
>> I don't know how to calculate the maximum disk cache size Squid can
>> support per process.
>>
>> Below is my partition size:
>> #df
>> /dev/sdb1            206424760     65884 195873116   1% /cache1
>> /dev/sdd1            206424760     60704 195878296   1% /cache3
>> /dev/sde1            206424760     60704 195878296   1% /cache4
>> /dev/sdf1            206424760     60704 195878296   1% /cache5
>> /dev/sdg1            206424760     60704 195878296   1% /cache6
>> /dev/sdh1            206424760     79192 195859808   1% /cache7
>> /dev/sdi1            206424760     79200 195859800   1% /cache8
>> /dev/sdc1            206424760     60704 195878296   1% /cache2
>>
>> I use 70% of each partition's size for the squid cache_dir size in
>> MBytes, and workers=4, so my squid.conf is:
>>
>> workers 4
>> ## 1. Handle small cache objects
>> cpu_affinity_map process_numbers=1,2,3,4 cores=2,4,6,8
>> cache_dir rock /cache1/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>> cache_dir rock /cache2/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>> cache_dir rock /cache3/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>> cache_dir rock /cache4/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>> cache_dir rock /cache5/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>> cache_dir rock /cache6/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>>
>> ## 2. Handle large objects > 32 KB and < 200 MB. The fourth worker
>> ## handles large files.
>> if ${process_number}=4
>> cache_dir aufs /cache7/squid/${process_number} 170000 16 256 min-size=31001 max-size=200000000
>> cache_dir aufs /cache8/squid/${process_number} 170000 16 256 min-size=31001 max-size=200000000
>> endif
>>
>>
>> My question is:
>> 1. In section #2, will the fourth worker be able to handle a total
>> cache size larger than 300 GB?
>
>
> There are three limits on cache size:
>
> 1) The main limit is available RAM.
> http://wiki.squid-cache.org/SquidFaq/SquidMemory
>
> 2) Stored object count: 2^27-1 objects per cache directory. No exceptions.
> If you fill that 200GB with billions of 10-byte objects, Squid will run out
> of file numbers before the space is full. But if you fill it with a lot of
> 10GB objects, Squid will use all of it and want more.
>
> 3) 32-bit and 64-bit filesystem limitations. Squid can't magically use more
> disk space than your OS can supply.
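Limit (2) can be made concrete with a little arithmetic. A quick sketch, taking the 2^27-1 figure above and asking how small the average object can be before a 200 GB cache_dir runs out of file numbers instead of space:

```python
# Per-directory object limit from the thread: 2^27 - 1 file numbers.
MAX_OBJECTS = 2**27 - 1  # 134,217,727

# For a 200 GB cache_dir, the average stored object must be at least this
# large for the disk space to fill up before the file numbers run out:
cache_bytes = 200 * 1024**3
min_avg_object = cache_bytes / MAX_OBJECTS

print(MAX_OBJECTS)            # 134217727
print(round(min_avg_object))  # ~1600 bytes: below this average object size,
                              # Squid exhausts file numbers before disk space
```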
>
>
>
>> 2. In section #1, worker number 1 will have 170 GB x 6 = 1020 GB of
>> cache_dir size? Will it be able to handle that much?
>
>
> Does it have the RAM?
> Rock store is a little unusual in that it has a fixed slot size, so the
> file count is a known number and there is less fuzziness in the RAM
> requirements for indexing:
>   (170000/31000) x 6  ==>  ~33 GB cache index, plus all the other regular
> operating memory. Say 34-36 GB of RAM required by the worker.
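Since rock's fixed slot size makes the file count exact, the slot arithmetic behind the estimate above can be sketched as follows (assuming max-size=31000 from the config doubles as the slot size, as the estimate implies):

```python
# With rock's fixed slot size, the file (slot) count per cache_dir is exact.
SLOT_SIZE = 31000      # bytes, from max-size=31000 in the config above
CACHE_DIR_MB = 170000  # per cache_dir
NUM_DIRS = 6

slots_per_dir = CACHE_DIR_MB * 1024**2 // SLOT_SIZE
total_slots = slots_per_dir * NUM_DIRS

print(slots_per_dir)  # ~5.75 million slots per directory
print(total_slots)    # ~34.5 million slots across all six directories
# Both counts sit well under the 2^27-1 (~134 million) per-directory limit;
# here it is the index RAM, not the object count, that bites first.
```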
>
> Amos
Received on Fri Mar 22 2013 - 02:20:27 MDT
