Re: [squid-users] issue with one of joe coopers modifications

From: Joe Cooper <>
Date: Fri, 23 Mar 2001 01:23:00 -0600

Hi Greg,

Using my instructions has nothing to do with having too little memory in
your machine to handle a cache that size. 73 L1 directories??? Are you
really using a cache_dir that large?

You are simply filling up your RAM with an in-core index of the cache
contents. This is normal behavior--Squid keeps a record of every object
in the store in memory. If your store is gigantic (as yours clearly
is), and your memory is not gigantic to match, you will run out of
memory. There is no leak, and there is no flaw in the malloc used by
default in Red Hat 6.2.

Lower your cache_dirs to something sensible (1GB for every 10MB of RAM
is a safe number for a standard Squid compile--a little more RAM is
needed for an async I/O compile). This, too, is covered in my
documentation for tuning Squid on Linux.
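The rule of thumb above (1GB of cache_dir per 10MB of RAM) can be sketched as
simple arithmetic. This is only an illustration: the ~13KB average object size
and ~100 bytes of in-memory metadata per object are ballpark assumptions in the
spirit of the Squid FAQ figures of the era, not numbers from this mail.

```python
# Rough estimate of Squid's in-core index cost for a given cache_dir size.
# Assumed figures (ballpark, not from this mail):
#   - average cached object size: ~13 KB
#   - in-memory metadata per object: ~100 bytes (store entry plus key)

def index_ram_mb(cache_dir_gb, avg_object_kb=13, bytes_per_object=100):
    """Approximate RAM in MB consumed by the in-core index alone."""
    objects = cache_dir_gb * 1024 * 1024 / avg_object_kb
    return objects * bytes_per_object / (1024 * 1024)

# A 1GB cache_dir costs on the order of 7-8MB of index RAM under these
# assumptions, which is roughly where the "10MB of RAM per 1GB of cache"
# rule lands once Squid's other per-object and runtime costs are added.
print(round(index_ram_mb(1), 1))
```

Scaling this up makes the original problem obvious: a cache large enough to
need 73 L1 directories implies millions of objects, and the index alone can
exceed the physical RAM of a modest machine.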

Hope this helps.

Greg wrote:

> Hello.
> I changed the first-level cache directory count from the default of 14 and
> used his formula, which gave me 73. Anyway, getting to the point: basically
> what happens is that after about 3 to 4 weeks it uses up all the memory and
> hits swap space. I have tried rebooting (no difference) and using the kill
> command (I had httpd and cron running and thought they were bad, so I killed
> them), but that still made no difference. So I am now using the alternative
> malloc (configure --enable-dlmalloc)
> and seeing if there is any other difference. The only thing I can't do
> is build a custom kernel (I think there are compiler problems in my
> version of Red Hat 6.2).
> Thanks,
> Greg
                      Joe Cooper <>
                  Affordable Web Caching Proxy Appliances
Received on Fri Mar 23 2001 - 00:14:27 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:58:48 MST