Re: [squid-users] Maximum disk cache size per worker

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Fri, 22 Mar 2013 19:13:10 +1300

On 22/03/2013 4:39 p.m., Alex Rousskov wrote:
> On 03/21/2013 08:11 PM, Sokvantha YOUK wrote:
>
>> Thank you for your advice. If I want large files to be cached when
>> they are first seen by a worker, should my config be changed so that
>> the first worker to see a large file caches it, leaving everything
>> else to the remaining rock-store workers?
> Your OS assigns workers to incoming connections. Squid does not control
> that assignment. For the purposes of designing your storage, you may
> assume that the next request goes to a random worker. Thus, each of your
> workers must cache large files for files to be reliably cached.
>
>
>> I don't want cached content to be duplicated
>> among AUFS cache_dir and I want to use the advantage of rock store
>> which can be shared within worker on SMP deployment.
> The above is not yet possible using official code. Your options include:
>
> 1. Do not cache large files.
>
> 2. Cache large files in isolated ufs-based cache_dirs,
> one cache_dir per worker,
> suffering from false misses and duplicates.
> I believe somebody reported success with this approach. YMMV.
>
> 3. Cache large files in SMP-aware rock cache_dirs,
> using unofficial experimental Large Rock branch
> that does not limit the size of cached objects to 32KB:
> http://wiki.squid-cache.org/Features/LargeRockStore
>
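For option 2, the usual trick is the ${process_number} macro, which
expands to the kid process number so that every worker can point at
its own disk directory. A minimal sketch, with placeholder paths and
sizes:

  workers 4

  # One private AUFS dir per worker. Workers cannot read each
  # other's dirs, hence the false misses and duplicated copies
  # Alex mentions above.
  cache_dir aufs /var/cache/squid/worker${process_number} 50000 16 256 min-size=32768

  # Small objects can still go into a shared rock store
  # (32KB object size limit in official code).
  cache_dir rock /var/cache/squid/rock 10000 max-size=32767

With option 3's Large Rock branch, the rock line alone should be able
to take a much larger max-size and absorb the large files as well,
assuming the branch behaves as that wiki page describes.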

4. Set up the SMP equivalent of a CARP peering hierarchy, with the
frontend workers using shared rock caches and the backends using UFS.
This minimizes cache duplication, but with the current SMP code it
requires disabling loop detection (probably not a good thing) and some
advanced configuration trickery.
If you actually want to go down that path, let me know and I'll put
the details together.
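
In the meantime, a rough sketch of the shape it might take. The
ports, paths, and the 2-frontend/2-backend split below are only
placeholders, and none of this is tested:

  # kids 1-2 are user-facing frontends, kids 3-4 are backends
  workers 4

  # Backends: loopback-only, one private UFS store each. The CARP
  # hash on the frontends partitions the URL space between them,
  # so each large object lands on exactly one backend.
  if ${process_number} = 3
  http_port 127.0.0.1:4003
  cache_dir aufs /var/cache/squid/back3 100000 16 256 min-size=32768
  endif
  if ${process_number} = 4
  http_port 127.0.0.1:4004
  cache_dir aufs /var/cache/squid/back4 100000 16 256 min-size=32768
  endif

  # Frontends: shared rock store for small objects; everything
  # else is forced through the CARP parents on the loopback ports.
  if ${process_number} = 1
  http_port 3128
  cache_dir rock /var/cache/squid/rock 10000 max-size=32767
  cache_peer 127.0.0.1 parent 4003 0 carp no-query name=back3
  cache_peer 127.0.0.1 parent 4004 0 carp no-query name=back4
  never_direct allow all
  endif
  if ${process_number} = 2
  http_port 3128
  cache_dir rock /var/cache/squid/rock 10000 max-size=32767
  cache_peer 127.0.0.1 parent 4003 0 carp no-query name=back3
  cache_peer 127.0.0.1 parent 4004 0 carp no-query name=back4
  never_direct allow all
  endif

  # A request passes through the same instance twice, so Via-based
  # forwarding-loop detection has to be switched off; this is the
  # "disabling loop detection" caveat above.
  via off

The min-size/max-size split keeps the rock and UFS stores disjoint,
so no object should end up cached in both tiers.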

Amos