[squid-users] Re: What are recommended settings for optimal sharing of cache between SMP workers?

From: Dr.x <ahmed.zaeem_at_netstream.ps>
Date: Tue, 18 Feb 2014 06:15:19 -0800 (PST)

Amos Jeffries wrote:
> On 19/02/2014 12:12 a.m., Dr.x wrote:
>> I'm having doubts:
>> without SMP, with the same traffic and the same users, I can save 40 Mbps,
>>
>> but with SMP, using a combination of AUFS and rock (32 KB max object size),
>> I can only save 20 Mbps.
>>
>>
>> I'm wondering: will large rock fix this for me?
>>
>
> How many Squid processes do you currently need to service those
> users' traffic?
>
> If the number is >1 then the answer is probably yes.
>
> * Each worker should have the same HIT ratio from AUFS-cached objects. Then
> the shared rock storage should increase the HIT ratio somewhat for workers
> which would not normally see those small objects.
>
>
>> Or should I return to AUFS and wait until Squid releases a version that
>> supports a bigger object size?
>>
>> Bandwidth saving is a big issue for me and must be achieved!
>>
>
> Your choice there.
>
> FYI: The upcoming Squid series with large-rock support is not planned to
> be packaged for another 3-6 months.
>
> HTH
> Amos

Hi Amos,
I have about 900 req/sec, and I think I need 4 or 5 workers at most.
I have 24 cores.
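
As a starting point I was thinking of something like this (just a minimal
sketch; the worker count and the core list are placeholders, not a
recommendation):

    # run 5 worker processes; the coordinator is started automatically
    workers 5

    # optionally pin each worker (kid processes 1-5) to its own core
    cpu_affinity_map process_numbers=1,2,3,4,5 cores=1,2,3,4,5
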
From the old Squid that was saving 40-45 Mbps, I found the mean object size:
  Mean Object Size: *142.30 KB*

That 142 KB mean is in the ~100 KB range, well above the 32 KB object limit of
the current rock store.

So, if I use large rock, will it improve the byte hit ratio?
Do you agree with me?

Now, regarding using AUFS together with rock:

At the moment I have 5 AUFS hard disks; each one has its own config file, its
own AUFS cache_dir and its own maximum object size.

What is the best way to implement SMP here?

Should I use "if" statements and map each worker to its own AUFS disk,
something like the sketch below?

I'm not sure which approach is best.
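
For example, something along these lines (a rough sketch only; the paths and
sizes are placeholders, and it assumes the "workers" line above):

    # small objects go to one rock cache_dir shared by all workers
    # (32 KB is the current rock object size limit)
    cache_dir rock /cache/rock 64000 max-size=32768

    # each worker gets a dedicated AUFS disk for the larger objects;
    # AUFS is not SMP-aware, so each aufs dir must be used by one worker only
    if ${process_number} = 1
    cache_dir aufs /cache/disk1 200000 16 256 min-size=32769
    endif
    if ${process_number} = 2
    cache_dir aufs /cache/disk2 200000 16 256 min-size=32769
    endif
    # ... and so on for workers 3, 4 and 5

The min-size/max-size split is just meant to keep the two stores from caching
the same object twice.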

I'm sure you can give me some advice on where to start.

Also, can I use large rock now?
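
(If it helps: my understanding is that, once the large-rock series is
packaged, one shared rock dir could also hold the bigger objects, roughly as
sketched below. The slot-size option and the sizes here are my assumptions
about that upcoming release, so please correct me if I have it wrong.)

    # assumed sketch for the upcoming large-rock series, not current releases:
    # objects bigger than one slot get spread across multiple slots
    cache_dir rock /cache/rock-large 200000 slot-size=32768 max-size=524288
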
regards

-----
Dr.x

Received on Tue Feb 18 2014 - 14:15:38 MST
