Re: Squid slow during cleanup

From: Duane Wessels <wessels>
Date: Wed, 11 Sep 96 12:18:09 -0700

gleeson@unimelb.edu.au writes:

>Hi,
> We've recently been encountering problems with squid when it goes
>into cleanup mode after passing the cache_swap_high limit. Performance
>slows considerably: 10-20 times slower than normal. Examination of cache.log
>shows that it is attempting to cleanup the swap area: regular entries of
>[..] store_clean.c:98: Cleaned 10 unused files from /servers/http/cache/xx
>
>When it gets into this state, the cache swap area grows faster than squid
>can clean it up, and the only solution we've been able to try is to delete
>the entire cache and start again from scratch. This involves changing
>squid.conf to go into 'proxy-only' mode while the files in the cache_dir
>area are deleted, and then changing it back. Funnily enough, doing this
>seems to have no real impact on the hit rate from the cache (calculated
>over a week).
>
>The performance degradation is a major issue, with over 5,000 local users
>generating almost 2 million requests a week, as well as 6 squid neighbours
>generating another 4+ million ICP and TCP requests. Users report that the
>proxy performance is almost unbearable when it is in this state.
>
>Has anyone else encountered such a problem and have any suggestions?
>
>For reference, our server is an AlphaServer 8200, with 512Mb RAM, 2Gb swap
>and the following squid.conf settings:
>cache_mem 256
>cache_swap 15000
>cache_swap_low 75
>cache_swap_high 90
>cache_mem_low 75
>cache_mem_high 95

First, to clarify: the 'Cleaned 10 unused files...' debugging output is not
really related to the mode where the cache purges files because it is over
the cache_swap_low/high marks.

The store_clean routines' purpose is to reclaim disk space from orphaned
swap files. The most common way swap files become orphaned is during a
cache restart: instead of unlinking the expired files at startup, we just
skip over them.
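
To illustrate the idea only, here is a minimal hypothetical sketch, not
the real store_clean.c; index_has_file() stands in for whatever lookup
the actual store index provides:

    /* Sketch: unlink swap files in one cache subdirectory that the
     * in-memory store index no longer references.  Hypothetical code,
     * for illustration only. */
    #include <stdio.h>
    #include <dirent.h>
    #include <unistd.h>

    extern int index_has_file(const char *path);  /* assumed index lookup */

    int clean_swap_dir(const char *dir)
    {
        DIR *dp;
        struct dirent *de;
        char path[1024];
        int cleaned = 0;
        if ((dp = opendir(dir)) == NULL)
            return -1;
        while ((de = readdir(dp)) != NULL) {
            if (de->d_name[0] == '.')
                continue;               /* skip '.' and '..' entries */
            snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
            if (!index_has_file(path)) {
                unlink(path);           /* orphaned: reclaim the space */
                cleaned++;
            }
        }
        closedir(dp);
        return cleaned;                 /* the 'Cleaned N unused files' count */
    }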

Second, as others have suggested, I would move your cache_swap_low and
cache_swap_high values closer together, like 90-95 or 85-90. That
should reduce the total time spent in reclamation mode.
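In squid.conf terms that would be, for example:

    # a narrower low/high gap means shorter purge runs
    cache_swap_low  90
    cache_swap_high 95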

Third, maybe use the ttl_pattern rules to be more selective about
what you keep? Maybe always throw out .html and keep everything else?
Maybe throw out .tar.gz if you don't really expect to get good hits
from them? Similarly, maybe reduce your maximum object sizes. We have
found that when there is a shortage of disk space, a lower max object
size gives better hit rates (by byte volume).
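
Schematically, something like the following (illustrative values only;
double-check the exact ttl_pattern argument list and the object-size
directive against the comments in the squid.conf shipped with your
release):

    # give .html a short TTL so those objects are purged first
    ttl_pattern  \.html$     60
    # likewise for large archives that rarely yield repeat hits
    ttl_pattern  \.tar\.gz$  60
    # cap object size to favor many small, frequently-hit objects
    maximum_object_size  4096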

Duane W.