Re: [squid-users] ./squid -z questions

From: Hendrik Voigtländer <hendrik@dont-contact.us>
Date: Tue, 29 Jun 2004 20:08:39 +0200

Matus UHLAR - fantomas wrote:
> On 28.06 22:54, Hendrik Voigtländer wrote:
>
>>This is what I would do:
>>- deactivate all cache_dirs which are close to 100%
>>- deactivate all cache_dirs on one disk
>>- repartition it with a single partition & mkreiserfs
>>- put one cache_dir on it
>>- repeat with the other disks
>>- if you are on a big uplink, hit ratios shouldn't suffer too much;
>>take your time and let squid refill the cache_dirs.
>>- be careful with the cache_dir size: with your disks, not more than
>>50%, and you don't need to keep more than a week of traffic.
>
>
> well, I would use more than 50%, maybe 70% or, better, ~80% of the free
> space on the created filesystem, where the overhead is already subtracted
>
Hi,

In general I agree on a maximum possible usage of ~80%, but if I read
the mails correctly she has about 2GB of RAM and 5 x 73GB hard disks
for the cache_dirs alone. That is roughly 365GB of raw storage; what
about the RAM requirements when using ~80% of it?

http://www.mail-archive.com/squid-users@squid-cache.org/msg18052.html
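
As a rough sketch, using the usual rule of thumb of about 10MB of index
RAM per GB of cache_dir (the exact figure depends on the mean object
size, and cache_mem plus OS overhead come on top):

  5 x 73GB disks   = ~365GB raw
  at ~80% usage    = ~290GB of cache_dir -> ~2.9GB of index RAM
  at ~50% usage    = ~180GB of cache_dir -> ~1.8GB of index RAM

Either way, 2GB of RAM looks tight once cache_mem is added.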

IMHO there is not enough RAM in the machine to run all disks at ~80%.
Since having more disks gives more speed, I think reducing the maximum
usage per disk is a better option than reducing the number of disks.
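
For illustration, staying at ~50% of each 73GB disk would mean cache_dir
lines roughly like these (a sketch only; the aufs store type, the mount
points and the L1/L2 values are assumptions):

  # one cache_dir per physical disk, ~50% of a 73GB drive (size in MB)
  cache_dir aufs /cache1 36000 16 256
  cache_dir aufs /cache2 36000 16 256
  cache_dir aufs /cache3 36000 16 256
  cache_dir aufs /cache4 36000 16 256
  cache_dir aufs /cache5 36000 16 256

Going towards ~80% would mean ~58000 per line, which is where the index
RAM estimate above starts to exceed the installed 2GB.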

>
> I would check and probably increase maximum_object_size up to ~32MB
> and use cache_replacement_policy heap LFUDA
>
Good idea for such a large cache. Any idea how this will influence the
RAM requirements?
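
For reference, those suggestions would look roughly like this in
squid.conf (a sketch; if I remember correctly, the heap policies are
only available when Squid was built with --enable-removal-policies):

  # allow larger objects into the disk cache
  maximum_object_size 32 MB
  # LFUDA keeps popular objects regardless of size; good for byte hit ratio
  cache_replacement_policy heap LFUDA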

Regards, Hendrik
Received on Tue Jun 29 2004 - 12:12:49 MDT
