Re: [squid-users] squid not storing objects to disk and getting RELEASED on the fly

From: Rajkumar Seenivasan <rkcp613_at_gmail.com>
Date: Fri, 24 Sep 2010 13:16:00 -0400

Hello Amos,
see below for my responses... thx.

>>>> ? 50% empty cache required so as not to fill RAM? => cache is too big or RAM not enough.
Cache usage is approximately 6GB per day.
We have 15GB of physical memory on each box and the cache_dir is set to 20GB.
With cache_swap_low 65 and cache_swap_high 70, available memory dropped
to 50MB out of 15GB once cache_dir usage reached 14GB (the high
threshold).

>>>> What was the version in use before this happened? 3.1.8 okay for a while? or did it start discarding right at the point of upgrade from another?
We started testing with 3.1.6 and then used 3.1.8 in production. This
issue was noticed even during the QA. We didn't have any caching
servers before.

>>>> Server advertised the content-length as unknown then sent 279307 bytes. (-1/279307) Squid is forced to store it to disk immediately (could be a TB
>>>> about to arrive for all Squid knows).
I looked further into the logs; the entry I pointed out was from the
SIBLING request. Sorry about that.
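For reference, the store.log entry quoted further down can be split into its fields like this (just a sketch: the URL here is a stand-in, since the original was elided, and the field order assumed is squid's usual store.log layout):

```python
# hedged sketch: splitting the quoted store.log entry into its fields
line = ("1285176036.341 RELEASE -1 FFFFFFFF 7801460962DF9DCA15DE95562D3997CB "
        "200 1285158415 -1 1285230415 application/x-download -1/279307 "
        "GET http://example.invalid/file")
fields = line.split()
action = fields[1]                             # RELEASE: object dropped from the store
content_length, bytes_received = fields[10].split("/")
print(action, content_length, bytes_received)  # → RELEASE -1 279307
```

The "-1/279307" pair is the advertised Content-Length (unknown) versus the bytes actually received, which is the detail Amos picked up on below.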

>>>> These tell squid 50% of the cache allocated disk space MUST be empty at all times. Erase content if more is used. The defaults for these are less
>>>> than 100% in order to leave some small buffer of space for use by line-speed stuff still arriving while squid purged old objects to fit them.
Since our data changes every day, a cache_dir larger than 11GB should
not be needed to give enough buffer. On average, 6GB of disk cache is
used per day.
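If it helps, here is a sketch of what that would look like in squid.conf; the ~11GB size and the default watermarks are my assumptions, not something we have tested:

```
# hypothetical squid.conf fragment: ~11GB disk cache, default watermarks
cache_dir aufs /squid/var/cache 11264 16 256
cache_swap_low 90
cache_swap_high 95
```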

>>>> filesystem is reiserfs with RAID-0. only 11GB used for the cache.
>>>> Used or available?
11GB used out of 20GB.

>>>> The 10MB/GB of RAM usage by the in-memory index is calculated from an average object size around 4KB. You can check your available RAM roughly
>>>> meets Squid needs with: 10MB/GB of disk cache + the size of cache_mem + 10MB/GB of cache_mem + about 256 KB per number of concurrent clients at
>>>> peak traffic. This will give you a rough ceiling.

Yesterday morning, we changed the cache_replacement_policy from "heap
LFUDA" to "heap GDSF", cleaned up the cache_dir and started squid
fresh.

Current disk cache usage is 8GB (out of 20GB), i.e. after 30 hours.
Free memory is 1.7GB out of 15GB.

Based on your math, memory usage shouldn't be more than 3 or 4GB.
In this case, the memory used is far too high.
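For what it's worth, here is a rough sketch of that ceiling calculation in Python, using the numbers from our squid.conf; the peak client count is a guess on my part:

```python
def squid_ram_ceiling_mb(disk_cache_gb, cache_mem_mb, peak_clients):
    """Rough RAM ceiling per Amos's rule of thumb (result in MB)."""
    disk_index = disk_cache_gb * 10             # 10 MB index per GB of disk cache
    mem_index = cache_mem_mb / 1024 * 10        # 10 MB index per GB of cache_mem
    client_buffers = peak_clients * 256 / 1024  # ~256 KB per concurrent client
    return disk_index + cache_mem_mb + mem_index + client_buffers

# 20GB cache_dir, cache_mem 1536 MB, assuming ~1000 concurrent clients at peak
print(squid_ram_ceiling_mb(20, 1536, 1000))  # → 2001.0
```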

On Thu, Sep 23, 2010 at 12:21 AM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> On Wed, 22 Sep 2010 15:09:31 -0400, "Chad Naugle"
> <Chad.Naugle_at_travimp.com>
> wrote:
>> With that large amount of RAM I would increase those maximum numbers to,
>> let's say, 8 MB, 16 MB, 32 MB, especially if you plan on using heap LFUDA,
>> which is optimized for storing larger objects and trashes smaller objects
>> faster, whereas heap GDSF is the opposite, using LRU for memory for the
>> large objects to offset the difference.
>>
>> ---------------------------------------------
>> Chad E. Naugle
>> Tech Support II, x. 7981
>> Travel Impressions, Ltd.
>>
>>
>>
>>>>> Rajkumar Seenivasan <rkcp613_at_gmail.com> 9/22/2010 3:01 PM >>>
>> Thanks for the tip. I will try "heap GDSF" to see if it makes a
>> difference.
>> Any idea why the object is not considered a hot object and stored in
>> memory?
>
> see below.
>
>>
>> I have...
>> minimum_object_size 0 bytes
>> maximum_object_size 5120 KB
>>
>> maximum_object_size_in_memory 1024 KB
>>
>> Earlier we had cache_swap_low and high at 80 and 85%, and physical
>> memory usage climbed until only 50MB out of 15GB was free.
>> To fix this issue, the low and high were set to 50 and 55%.
>
> ? 50% empty cache required so as not to fill RAM? => cache is too big or
> RAM not enough.
>
>>
>> Does changing "cache_replacement_policy" and "cache_swap_low
>> / high" require a restart, or will a -k reconfigure do?
>>
>> Current usage: top
>> top - 14:33:39 up 12 days, 21:44,  3 users,  load average: 0.03, 0.03, 0.00
>> Tasks:  83 total,   1 running,  81 sleeping,   1 stopped,   0 zombie
>> Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.6%st
>> Mem:  15736360k total, 14175056k used,  1561304k free,   283140k buffers
>> Swap: 25703960k total,       92k used, 25703868k free, 10692796k cached
>>
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 17442 squid     15   0 1821m 1.8g  14m S  0.3 11.7   4:03.23 squid
>>
>>
>> # free
>>              total       used       free     shared    buffers     cached
>> Mem:      15736360   14175164    1561196          0     283160   10692864
>> -/+ buffers/cache:    3199140   12537220
>> Swap:     25703960         92   25703868
>>
>>
>> Thanks.
>>
>>
>> On Wed, Sep 22, 2010 at 2:16 PM, Chad Naugle <Chad.Naugle_at_travimp.com>
>> wrote:
>>> Perhaps you can try switching to heap GDSF instead of heap LFUDA. What
>>> are your minimum_object_size and maximum_object_size set to?
>>>
>>> Perhaps you can also try setting the cache_swap_low / high back to
>>> default (90 - 95) to see if that will make a difference.
>>>
>>> ---------------------------------------------
>>> Chad E. Naugle
>>> Tech Support II, x. 7981
>>> Travel Impressions, Ltd.
>>>
>>>
>>>
>>>>>> Rajkumar Seenivasan <rkcp613_at_gmail.com> 9/22/2010 2:05 PM >>>
>>> I have the following for replacement policy...
>>>
>>> cache_replacement_policy heap LFUDA
>>> memory_replacement_policy lru
>>>
>>> thanks.
>>>
>>> On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle <Chad.Naugle_at_travimp.com>
>>> wrote:
>>>> What is your cache_replacement_policy directive set to?
>>>>
>>>> ---------------------------------------------
>>>> Chad E. Naugle
>>>> Tech Support II, x. 7981
>>>> Travel Impressions, Ltd.
>>>>
>>>>
>>>>
>>>>>>> Rajkumar Seenivasan <rkcp613_at_gmail.com> 9/22/2010 1:55 PM >>>
>>>> I have a strange issue happening with my squid (v3.1.8):
>>>> two squid servers in a sibling-sibling setup in accel mode.
>
> What was the version in use before this happened? 3.1.8 okay for a while?
> or did it start discarding right at the point of upgrade from another?
>
>>>>
>>>> after running squid for 2 to 3 days, the HIT rate has gone down:
>>>> from 50% to 34% for TCP and from 34% to 12% for UDP.
>>>>
>>>> store.log shows that even fresh requests are NOT getting stored to
>>>> disk and are getting RELEASED right away.
>>>> This issue is with both squids...
>>>>
>>>> store.log entry:
>>>> 1285176036.341 RELEASE -1 FFFFFFFF 7801460962DF9DCA15DE95562D3997CB
>>>> 200 1285158415        -1 1285230415 application/x-download -1/279307
>>>> GET http://....
>>>> Requests have a max-age of 20 hours.
>
> Server advertised the content-length as unknown then sent 279307 bytes.
> (-1/279307) Squid is forced to store it to disk immediately (could be a TB
> about to arrive for all Squid knows).
>
>>>>
>>>> squid.conf:
>>>> cache_dir aufs /squid/var/cache 20480 16 256
>>>> cache_mem 1536 MB
>>>> memory_pools off
>>>> cache_swap_low 50
>>>> cache_swap_high 55
>
> These tell squid 50% of the cache allocated disk space MUST be empty at
> all times. Erase content if more is used. The defaults for these are less
> than 100% in order to leave some small buffer of space for use by
> line-speed stuff still arriving while squid purged old objects to fit them.
>
> The 90%/95% numbers were created back when large HDDs were measured in MB.
>
> 50%/55% with 20GB cache only makes sense if you have something greater
> than 250Mbps of new cachable HTTP data flowing through this one Squid
> instance. In which case I'd suggest a bigger cache.
>
> (My estimate of the bandwidth is calculated from: % of cache needed free /
> 5 minute interval lag in purging.)
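
To make that estimate concrete, here is a sketch of the arithmetic (my assumption being that the purge lag window is the 5 minutes mentioned above):

```python
# hedged sketch of the bandwidth estimate: the headroom the watermarks keep
# free, divided by an assumed 5-minute purge lag, expressed in Mbps
cache_dir_gb = 20
cache_swap_low = 50            # percent of cache allowed to stay full
purge_lag_s = 5 * 60           # assumed lag before purging catches up

headroom_bytes = cache_dir_gb * 1024**3 * (1 - cache_swap_low / 100)
mbps = headroom_bytes * 8 / purge_lag_s / 1e6
print(round(mbps))  # → 286
```

which lands just above the "greater than 250Mbps" figure quoted above.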
>
>
>>>> refresh_pattern . 0 20% 1440
>>>>
>>>>
>>>> filesystem is reiserfs with RAID-0. only 11GB used for the cache.
>
> Used or available?
>
> cache_dir...20480 = 20GB allocated for the cache.
>
> 11GB is roughly 50% (cache_swap_low) of the 20GB. So that seems to be
> working.
>
>
> The 10MB/GB of RAM usage by the in-memory index is calculated from an
> average object size around 4KB. You can check your available RAM roughly
> meets Squid needs with:  10MB/GB of disk cache + the size of cache_mem +
> 10MB/GB of cache_mem + about 256 KB per number of concurrent clients at
> peak traffic. This will give you a rough ceiling.
>
> Amos
>
Received on Fri Sep 24 2010 - 17:16:02 MDT