Re: [squid-users] "Quadruple" memory usage with squid

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sat, 12 Dec 2009 14:38:52 +1300

Marcus Kool wrote:
> time has passed, and another thread states that FreeBSD developers
> confirm a known issue with superpages and suggest vfork().
>
> vfork() halts the parent only while a child has not yet made its
> call to exec(); with 26 helper children that happens 26 times.
> This pause may be short enough to make vfork() a workable
> workaround for the problem. Of course I am interested to see
> performance numbers.

I spent a while yesterday testing to see what was needed to use vfork().
The point of using fork() seems to have been to set up pipes from some
FDs held by the main Squid process to the stdin/stdout/stderr of the
child, so that after execvp() the helper can simply use the std*
descriptors. This needs to be re-plumbed when using vfork(), since a
vfork() child cannot safely set up the pipes itself.
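
For reference, the existing fork() plumbing is roughly the following.
A minimal sketch, not Squid's actual ipcCreate() code; spawn_helper
and the pipe handling are invented for illustration:

    #include <unistd.h>
    #include <sys/types.h>

    /* Hypothetical helper spawner showing the classic fork() plumbing:
     * the parent creates two pipes, the child dup2()s them onto
     * stdin/stdout, then exec()s the helper so it only ever needs
     * the std* descriptors. */
    pid_t spawn_helper(char *const argv[], int *to_child, int *from_child)
    {
        int in_pipe[2], out_pipe[2];
        pid_t pid;

        if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0)
            return -1;

        pid = fork();
        if (pid < 0)
            return -1;

        if (pid == 0) {
            /* child: wire the pipe ends onto the std* descriptors */
            dup2(in_pipe[0], STDIN_FILENO);
            dup2(out_pipe[1], STDOUT_FILENO);
            close(in_pipe[0]);  close(in_pipe[1]);
            close(out_pipe[0]); close(out_pipe[1]);
            execvp(argv[0], argv);
            _exit(127);          /* only reached if exec() failed */
        }

        /* parent: keep the opposite end of each pipe */
        close(in_pipe[0]);
        close(out_pipe[1]);
        *to_child   = in_pipe[1];
        *from_child = out_pipe[0];
        return pid;
    }

The catch is that POSIX only guarantees a vfork() child may call
_exit() or an exec-family function, so the dup2() step above is
exactly the part that has to move elsewhere (for example descriptors
arranged before the vfork(), or posix_spawn() file actions).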

>
> But forgive me for asking why option 5 is not considered.
> All new information indicates that both Squid processes
> will fork fast. Variations of option 5 may even give better
> results; e.g.
> - memfs,
> - change size of mem_node to 4096 bytes (is it safe?)

Seems to be safe enough to use. Linda is apparently using it now.

Though there is a potential loss of disk I/O efficiency when the data
portion does not land on the 4096/8192-byte disk block boundaries. The
few dozen bytes of overhead that differ between the disk and memory
nodes are annoying.

There are ways around that, but they need some work to get right.
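
To make the arithmetic concrete: a minimal sketch, not Squid's actual
mem_node definition (the field names here are invented), showing why a
page-aligning allocator nearly doubles the footprint of a 4112-byte
object:

    #include <stdio.h>
    #include <stddef.h>

    #define SM_PAGE_SIZE 4096    /* size of the data payload */

    /* same shape as the problem: one page of payload plus a few
     * dozen bytes of bookkeeping pushes the object just past a
     * page boundary */
    struct node_like {
        char data[SM_PAGE_SIZE];
        void *next;              /* invented bookkeeping fields */
        size_t start;
    };

    int main(void)
    {
        size_t sz = sizeof(struct node_like);
        /* a page-aligning allocator rounds each object up to a
         * whole number of 4096-byte pages */
        size_t rounded = (sz + 4095) & ~(size_t)4095;

        printf("object size : %zu bytes\n", sz);       /* 4112 on LP64 */
        printf("pages used  : %zu bytes\n", rounded);  /* 8192 */
        printf("wasted      : %zu bytes\n", rounded - sz);
        return 0;
    }

Trimming mem_node down to 4096 bytes total (or using an allocator that
does not page-align such objects) removes that rounding loss, at the
cost of the data portion no longer filling a whole disk block.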

> - use an alternative malloc implementation, like TCMalloc, which only
>   aligns chunks bigger than 32K on a page boundary
>
> The size of the mem_node objects (4112 bytes) is definitely inefficient
> and wastes too much memory to leave it unchanged. A memory allocator
> that does not page-align these objects is the simplest rescue until
> the Squid developers come up with a solution.
>
> Marcus
>
>
> Linda Messerschmidt wrote:
>> On Wed, Nov 25, 2009 at 11:18 AM, Marcus Kool
>> <marcus.kool_at_urlfilterdb.com> wrote:
>>> The FreeBSD list may have an explanation of why there are
>>> superpage demotions before we expect them (when there are no forks
>>> and no big demands for memory).
>>
>> I think they are simply free()s, since Squid was holding only 5 MB
>> of unused memory at any time.
>>
>>> option 5. (multi-CPU systems only).
>>> use 2 instances of Squid:
>>> 1. with a null cache_dir, a small cache_mem (e.g. 100 MB),
>>>    16 URL rewriters, and a Squid parent
>>> 2. a Squid parent with a null cache_dir and a HUGE cache_mem
>>>
>>> Both Squid processes will rotate/restart fast.
>>
>> I think our "option 5" would be the 20GB memfs cache_dir solution, as
>> that also hacks around the "double allocation" issue.
>>
>> But one way or the other there is some kind of bug here... squid
>> claims it is using X memory and it is really using 2X. Even if it is
>> only a display error and it really is using the memory, I would like
>> to know for certain the origin so I can move on knowing I tried my
>> best. :-)
>>
>> Thanks!
>>
>>
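
In squid.conf terms, the two-instance "option 5" layout quoted above
might look roughly like the sketch below. All the port numbers, sizes
and paths here are illustrative placeholders, not tested configuration:

    # instance 1: frontend, rewriters, small memory, null cache
    http_port 3128
    cache_mem 100 MB
    cache_dir null /tmp
    # rewriter path is a placeholder
    url_rewrite_program /usr/local/bin/rewriter
    url_rewrite_children 16
    cache_peer 127.0.0.1 parent 3129 0 no-query no-digest default
    never_direct allow all

    # instance 2: memory-only parent with the huge cache_mem
    http_port 3129
    cache_mem 16384 MB
    cache_dir null /tmp

Each instance should rotate and restart quickly, since neither has a
large disk cache to rescan.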

Amos

-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
   Current Beta Squid 3.1.0.15