[squid-users] Re: Large rock problem

From: Ayham Abou Afach <a.afach_at_hybridware.co>
Date: Tue, 03 Dec 2013 18:19:11 +0200

Hi Alex

On 12/02/2013 05:34 PM, Alex Rousskov wrote:
> On 12/02/2013 02:53 AM, Ayham Abou Afach wrote:
>> On 12/01/2013 10:43 PM, Alex Rousskov wrote:
>>> On 12/01/2013 03:35 AM, Ayham Abou Afach wrote:
>>>> I'm testing Squid bzr with large rock support and facing the
>>>> following problems:
>
>>> Which bzr branch? To test Large Rock support, you should be using
>>> collapsed-fwd.
>
>
>> Yes Alex, I'm using the large rock branch.
>
> Just wanted to make sure, because the collapsed-fwd branch above (the
> one you should be using) is not the "large rock" branch you might be
> referring to; that branch also exists but lacks critical fixes and
> improvements present in the collapsed forwarding branch.
>

Sorry Alex, I think I was using the wrong one:
      large-rock

So I should first redo my tests on the new branch and then continue with
this post.
But why is the large rock branch that is referenced from the large rock
wiki so old?

>
>>>> - I can't create a rock dir with a slot size larger than 32768. After
>>>> increasing this value I get an "assertion failed" error.
>>>
>>> Why do you want to increase the slot size? The 32KB slot size limit
>>> does not limit the size of the responses that Large Rock can store. It
>>> comes from shared memory-related limits unrelated to caching. In most
>>> cases, you should not specify the slot size at all, letting Squid pick
>>> the "best" one.
>>
>> I'm using rock dirs larger than 500 GB, so when I use the default slot
>> size (16 KB) I get an error saying that I am wasting disk space (related
>> to object count * slot size), so I increased the slot size to support
>> the larger dirs.
>
> You are essentially hitting a different limit then (also mostly
> unrelated to Rock) -- Squid limits the number of cache_dir entries to
> about 2^25 (33554431). Since each Rock slot can be occupied by a
> complete entry, using Rock dirs larger than 2^25*slot-size leads to
> wasted disk space. UFS-based caches have the same limit but do not check
> for it (reaching the limit was not likely when the UFS code was written).
>
> If you want to maximize disk space usage without changing Squid code,
> use 32KB slots and set your cache_dir size to the minimum of usable disk
> space and 1048575 MBytes.
>
> If your disks are larger than 1TB, then you may also create multiple
> cache_dirs per disk, but I recommend trying that only _after_ you got
> everything else working. It is best to start with one cache_dir or, if
> you are eager to skip simpler steps, one cache_dir per cache disk.
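
For reference, the arithmetic behind those limits works out as follows (a
minimal Python sketch, not Squid code; the ~2^25 entry limit, the slot
sizes, and the one-entry-per-slot worst case are taken from the discussion
above, and 1 MByte = 1024 KBytes is assumed):

    # Squid limits the number of cache_dir entries to about 2^25.
    MAX_ENTRIES = 2**25 - 1  # 33554431

    def max_useful_dir_mb(slot_size_kb):
        """Largest rock cache_dir (in MBytes) that the entry limit can
        fill, assuming the worst case of one complete entry per slot."""
        return MAX_ENTRIES * slot_size_kb // 1024

    print(max_useful_dir_mb(16))  # default 16 KB slots -> 524287 MB (~512 GB)
    print(max_useful_dir_mb(32))  # 32 KB slots         -> 1048575 MB (~1 TB)
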
>
>
>>>> - After starting squid with any rock store, it uses the store for some
>>>> time (1 or 2 hours) and then it stops reading or writing anything to
>>>> disk (disk hit 0), and it seems like the rock dir is disconnected; I
>>>> need to restart the proxy to see the disk again.
>
>
>>> Any potentially related warnings or errors in cache.log? What does the
>>> cache manager mgr:storedir page say before and after that "hit 0"
>>> happens? How do you know that Squid is not writing anything to disk?
>
>
>> squidclient storedir gives empty values:
>>
>> by kid1
>> Store Directory Statistics:
>> Store Entries:           19742
>> Maximum Swap Size:       0 KB
>> Current Store Swap Size: 0.00 KB
>> Current Capacity:        0.00 used, 0.00 free
>
>> and the same for the other kids
>
> Your Squid does not use any cache_dirs. I will need to repeat my earlier
> question: Any potentially related warnings or errors in cache.log?
>
> Also, I asked for "before" and "after" mgr:storedir snapshots. Is the
> one you posted before or after? Or do both of those snapshots show zero
> maximum swap size?
>
>
> Thank you,
>
> Alex.
>
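
As a side note, one way to capture the "before" and "after" mgr:storedir
snapshots requested above is a small script along these lines (a rough
sketch, not from this thread; it assumes squidclient is installed and can
reach the cache manager with its default host/port settings):

    import subprocess
    import time

    def storedir_snapshot():
        # The same cache manager report quoted above; adjust the
        # squidclient command line if your setup needs a host or port.
        return subprocess.check_output(["squidclient", "mgr:storedir"]).decode()

    before = storedir_snapshot()
    time.sleep(2 * 60 * 60)  # wait until the "disk hit 0" state appears
    after = storedir_snapshot()

    for label, report in (("BEFORE", before), ("AFTER", after)):
        print("==", label, "==")
        for line in report.splitlines():
            if "Swap Size" in line or "Capacity" in line:
                print(line.strip())
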