Re: [squid-users] "Quadruple" memory usage with squid

From: Linda Messerschmidt <linda.messerschmidt_at_gmail.com>
Date: Wed, 25 Nov 2009 11:41:08 -0500

On Wed, Nov 25, 2009 at 11:18 AM, Marcus Kool
<marcus.kool_at_urlfilterdb.com> wrote:
> The FreeBSD list may have an explanation for why there are
> superpage demotions before we expect them (when there are no forks
> and no big demands for memory).

I think they are simply free()s, since Squid was holding only 5 MB
of unused memory at any time.
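
(For what it's worth, the pmap counters can be watched directly;
these sysctl names are from the 7.x/8.x amd64 pmap, so worth
verifying on your release:)

    # cumulative superpage promotions/demotions since boot
    sysctl vm.pmap.pde.promotions vm.pmap.pde.demotions

    # current number of active superpage mappings
    sysctl vm.pmap.pde.mappings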

> option 5 (multi-CPU systems only):
> use 2 instances of Squid:
> 1. one with a null cache_dir, a small cache_mem (e.g. 100 MB),
>   16 URL rewriters, and a Squid parent
> 2. a Squid parent with a null cache_dir and a HUGE cache_mem
>
> Both Squid processes will rotate/restart fast.
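
(If I follow, that layout comes out roughly as the squid.conf
fragments below; the ports, peer address, and rewrite program are my
own placeholders:)

    # instance 1: frontend with a null disk cache, a small cache_mem,
    # and the URL rewriters, forwarding everything to the parent
    http_port 3128
    cache_mem 100 MB
    cache_dir null /tmp
    url_rewrite_program /usr/local/bin/rewriter   # placeholder
    url_rewrite_children 16
    cache_peer 127.0.0.1 parent 3129 0 no-query no-digest default
    never_direct allow all

    # instance 2: the parent, null disk cache, huge cache_mem
    http_port 3129
    cache_mem 16 GB
    cache_dir null /tmp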

I think our "option 5" would be the 20 GB memfs cache_dir solution,
as that also hacks around the "double allocation" issue.
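
(On FreeBSD that would look something like the following; the size,
mount point, and cache_dir numbers are placeholders:)

    # mount a 20 GB swap-backed memory filesystem for the cache
    mdmfs -s 20g md /squid/cache

    # squid.conf: put the "disk" cache on the memory filesystem,
    # leaving some headroom below the mount size
    cache_dir ufs /squid/cache 18000 16 256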

But one way or the other there is some kind of bug here... Squid
claims it is using X memory while it is really using 2X. Even if it
is only a display error and it really is using the memory, I would
like to know the origin for certain so I can move on knowing I tried
my best. :-)
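
(The comparison I keep making, for the record; the pid file path is
whatever your build uses:)

    # Squid's own accounting, via the cache manager
    squidclient mgr:info | grep -i memory

    # the kernel's view of the same process (RSS/VSZ in KB)
    ps -o rss,vsz -p `cat /usr/local/squid/logs/squid.pid`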

Thanks!