Re: [squid-users] SMP Squid and aufs Stores

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Tue, 23 Jul 2013 14:20:55 +1200

On 23/07/2013 1:40 p.m., Golden Shadow wrote:
> Hi there!
>
> I have a TPROXY squid version 3.3.7, installed on a Dell server with 2 X 2.7 GHz CPU, each with 12 cores. The server has 192 GB RAM and around 8 TB disk storage. At the moment, cache manager reports the following:
>
> Number of clients accessing cache: 33
> Average HTTP requests per minute since start: 22829.1
> CPU Usage: 71.90% (Sometimes it reaches 85%)
>
>
> Squid is configured to fork only one worker. I'm thinking of enabling SMP and forking more squid workers to use more CPU cores. The problem is I don't want to use ROCK stores, because the maximum object size would then be 32 KB, knowing that I don't want to increase the system shared page size. Caching large objects (> 2 MB) by this squid is required.
>
>
> My question is: If I fork 2 or more squid workers, while sticking to the aufs cache_dirs I'm currently using, would that break my aufs stores in any way? I know aufs stores cannot be shared among the workers, and I guess there could be as many duplicate copies of some objects in the store as the number of workers, am I right? Would there be any other negative effects on the performance of squid? Do you recommend this configuration knowing that I don't want to use ROCK stores for the reasons mentioned above?

Squid workers operate much like running multiple copies of Squid side by
side on the same box. However, SMP-aware components (like rock) can share
their information to reduce resource usage, improve performance, etc.

AUFS not being SMP-aware has the following issues:
  * Cannot share cache indexes, so yes, you end up with each dir
containing duplicate content.
  * We do not yet have an equivalent to ICP/HTCP sibling protocols to
route requests between the workers and reduce the duplication problem.
  * You can expect some drop in the HIT ratio as the "heat" of traffic
gets split between the workers. Overall it should not be a big
reduction, but some have found it to be, YMMV.
  * You need to use ${process_number} or "if" directives in squid.conf
to ensure no two workers touch the same AUFS cache - they will corrupt
each other's content if they do (see the sketch after this list). This
is not strictly an SMP issue; it also occurs in older Squid when another
process touches the cache - SMP just makes it easier to misconfigure.
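
A minimal squid.conf sketch of that per-worker cache_dir separation (the
paths, sizes and worker count are illustrative assumptions - adjust to
your own layout and run "squid -z" to create any new directories):

  workers 2

  # Each worker gets its own AUFS directory via the ${process_number}
  # macro: worker 1 uses /cache/aufs1, worker 2 uses /cache/aufs2.
  cache_dir aufs /cache/aufs${process_number} 100000 16 256

  # The same separation written with "if" conditionals instead:
  # if ${process_number} = 1
  # cache_dir aufs /cache/aufs1 100000 16 256
  # endif
  # if ${process_number} = 2
  # cache_dir aufs /cache/aufs2 100000 16 256
  # endif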

If you have a consistent-but-large object size you may be interested in
the work being done to remove that size limitation from rock store.
http://wiki.squid-cache.org/Features/LargeRockStore
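
Once that work lands in a release, a rock cache_dir able to hold
multi-megabyte objects should become possible with something along these
lines (option names and values here are assumptions based on the feature
page, not a tested configuration - check the cache_dir documentation of
whatever release you deploy):

  # shared, SMP-aware store; large-rock spreads big objects over many slots
  cache_dir rock /cache/rock 200000 max-size=4194304 slot-size=32768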

> I have another question, which is not related to squid. I can create new threads on squid-users mailing list, but I don't know how to reply to threads, hope I'm not looking so stupid here! I already subscribed to the mailing list but for some reason I don't get list messages on my email. Would you please tell me how I can reply to threads?

That sounds like you have a broken email client. The good ones have a
"Reply to List" button or a plugin that adds one. Lacking that, you have
to "Reply To All" and the list will be one of the recipients - it is
polite to remove the personal recipients if you go that way.

HTH
Amos