Re: [squid-users] "Quadruple" memory usage with squid

From: Linda Messerschmidt <linda.messerschmidt_at_gmail.com>
Date: Mon, 23 Nov 2009 21:40:35 -0500

On Mon, Nov 23, 2009 at 8:06 PM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> Accepting the challenge and going off on the side tangent ;) ....
>
> redirect helpers can be reduced or removed in a lot of cases by using:

Our redirectors aren't the problem. They work fine for 23:55 every
day; over 10 billion served. :-)

It's solely the overhead of fork()ing 16 copies of the squid
executable and its mammoth address space to start new ones when the
logs are rotated.

The redirectors do a lot of dynamic heavy lifting, and there's simply
no way to replace them with a static configuration.

I watched this happen today, because I noticed we accidentally had the
24GB machine with cache_mem set to 12GB. I tried to cut it back to
6GB with a squid -k reconfigure. At the time, the squid process was
right at 24GB VSZ, with about 15GB RSS. It started spawning child
processes very, very slowly, but it was the *parent* process that went
insane with CPU usage... it pegged its CPU at 100% and the load
average on the machine went to about 6. The spawned child processes
had the same VSZ but a tiny RSS, which is about what I'd
expect.

I don't know if it's just the kernel copying the process's page
tables for the next fork(), or if there's something else going on
during reconfig that would cause that kind of CPU usage. But it seems
to stall out everything.

> Maybe. We would like to diagnose this problem and fix it properly, but if
> its too much hassle you can go that way.

It would definitely be my preference to diagnose and fix the problem
and I can live with a fair amount of hassle to get there. (Unless you
are saying "you are using redirectors" is the problem, in which case
memfs it is. ;-) )

Thanks!
Received on Tue Nov 24 2009 - 02:40:39 MST

This archive was generated by hypermail 2.2.0 : Tue Nov 24 2009 - 12:00:04 MST