Re: [squid-users] Maximum disk cache size per worker

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Fri, 22 Mar 2013 02:07:50 +1300

On 21/03/2013 9:21 p.m., Sokvantha YOUK wrote:
> Dear All,
>
> I am working on distributing cached objects by size to different
> Squid processes. My server is CentOS 6 x86_64 with Squid 3.3.3.
> I don't know how to calculate the maximum disk cache size Squid can
> support per process.
>
> Below is my partition size:
> # df
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sdb1 206424760 65884 195873116 1% /cache1
> /dev/sdd1 206424760 60704 195878296 1% /cache3
> /dev/sde1 206424760 60704 195878296 1% /cache4
> /dev/sdf1 206424760 60704 195878296 1% /cache5
> /dev/sdg1 206424760 60704 195878296 1% /cache6
> /dev/sdh1 206424760 79192 195859808 1% /cache7
> /dev/sdi1 206424760 79200 195859800 1% /cache8
> /dev/sdc1 206424760 60704 195878296 1% /cache2
>
> I use 70% of each partition's size for the cache_dir Mbytes value,
> and workers=4, so my squid.conf is:
>
> workers 4
> ## 1. Handle small cache objects
> cpu_affinity_map process_numbers=1,2,3,4 cores=2,4,6,8
> cache_dir rock /cache1/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
> cache_dir rock /cache2/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
> cache_dir rock /cache3/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
> cache_dir rock /cache4/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
> cache_dir rock /cache5/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
> cache_dir rock /cache6/squid 170000 max-size=31000 max-swap-rate=300 swap-timeout=300
>
> ## 2. Handle large objects > 32 KB and < 200 MB. The fourth worker handles large files.
> if ${process_number}=4
> cache_dir aufs /cache7/squid/${process_number} 170000 16 256 min-size=31001 max-size=200000000
> cache_dir aufs /cache8/squid/${process_number} 170000 16 256 min-size=31001 max-size=200000000
> endif
>
>
> My questions are:
> 1. In section #2, will the fourth worker be able to handle a cache
> size larger than 300 GB?

There are three limits on cache size:

1) The main limit is available RAM.
http://wiki.squid-cache.org/SquidFaq/SquidMemory

2) Stored object count: 2^27-1 objects per cache directory. No
exceptions. If you fill that 200GB with billions of 10-byte objects,
Squid will run out of file numbers before the space is full. But if you
try to fill it with a lot of 10GB objects, Squid will use it all
completely and want more. (A worked check against your numbers follows
below.)

3) 32-bit and 64-bit filesystem limitations. Squid can't magically use
more disk space than your OS can supply.
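
As a rough worked check of limit (2) against the numbers in your config
(my arithmetic, so re-check it):

    2^27-1                    ==> ~134 million objects allowed per cache_dir
    170000 MB / 31000 B slots ==> ~5.5 million slots per rock cache_dir

So the object-count limit is nowhere near being hit with these slot
sizes; limits (1) and (3) are the ones to watch here.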

> 2. In section #1, worker number 1 will have 170GB x 6 = 1020GB of
> cache_dir size? Will it be able to handle this much cache_dir size?

Does it have the RAM?
Rock store is a little unusual in that it has a fixed slot size, so the
file count is a known number and there is less fuzziness in the RAM
requirements for indexing.
   (170000/31000) x 6 ==> ~33 GB cache index + all the other regular
operating memory. Say 34-36 GB of RAM required by the worker.
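
Since the index cost scales roughly linearly with the configured
cache_dir size, one option if that much RAM is not available (an
untested sketch; the 85000 value is illustrative, not a recommendation)
is to shrink each rock dir:

    cache_dir rock /cache1/squid 85000 max-size=31000 max-swap-rate=300 swap-timeout=300

and likewise for /cache2 through /cache6. That roughly halves the index
memory, at the cost of half the small-object cache space.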

Amos