Re: [squid-users] Antwort: Re: [squid-users] Memory and CPU usage squid-3.1.4

From: Marcus Kool <marcus.kool_at_urlfilterdb.com>
Date: Thu, 17 Jun 2010 11:15:09 -0300

Martin,

Valgrind is a memory leak detection tool.
You need some developer skills to run it.

If you have a test environment with low load you may want
to give it a try.
- download the squid sources
- run configure with CFLAGS="-g -O2"
- run squid with valgrind
- wait
- kill squid with a TERM signal and look at the valgrind log file
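The steps above might look roughly like this in practice. This is only a sketch: the tarball name, install path, and log file name are examples, not prescriptions.

```shell
# Build squid from source with debug symbols (-g) so valgrind can
# attribute each leak to a file and line. Paths/version are examples.
tar xzf squid-3.1.4.tar.gz
cd squid-3.1.4
./configure CFLAGS="-g -O2"
make && make install

# Run squid in the foreground (-N) under valgrind's full leak checker,
# writing the report to a separate log file.
valgrind --leak-check=full --log-file=valgrind-squid.log \
    /usr/local/squid/sbin/squid -N

# After letting it serve traffic for a while, stop it cleanly with
# SIGTERM so squid's normal shutdown code runs before valgrind
# writes its leak summary.
kill -TERM $(pidof squid)

# Inspect the leak report.
less valgrind-squid.log
```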

Valgrind uses a lot of memory for its own bookkeeping and adds
considerable CPU overhead, so reduce cache_mem to a small value like 32 MB.
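For the test instance that means a squid.conf override along these lines (the value is just the suggestion above, tune it to your box):

```
# Keep squid's memory cache small so that valgrind's own shadow
# memory and bookkeeping still fit comfortably in RAM.
cache_mem 32 MB
```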

Most likely you will see many reported leaks, because
Squid does not free everything when it exits. This is normal.
Focus on the leaks that recur many times, i.e. loss records
with large block counts, and file a bug report for those.
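A recurring leak shows up in the valgrind log as a "definitely lost" loss record with a high block count, something like the following. The PID, addresses, sizes, and function/file names here are made-up placeholders purely to show the shape of a record:

```
==12345== 1,600 bytes in 100 blocks are definitely lost in loss record 40 of 87
==12345==    at 0x4C27BE3: malloc (vg_replace_malloc.c:299)
==12345==    by 0x52A310: xmalloc (util.c:527)
==12345==    by 0x4D83F1: some_squid_function (somefile.cc:123)
```

The "100 blocks" part is the signal: a one-off block leaked at exit is uninteresting, but a record whose block count keeps growing with traffic is worth a bug report.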

Please do not post the whole valgrind output to this list.

Marcus

Martin.Pichlmaier_at_continental-corporation.com wrote:
> Hello,
>
> I just wanted to report back the last tests:
>
> After the memory cache is filled to 100%, squid (3.1.4 or 3.1.3)
> still consumes more memory over time when under load, about 1-2 GB a day.
> "memory_pools off" did not change anything; the process size still rises.
> The high CPU usage seems to start once the process grows past a
> certain size, but I am not sure about that.
> Example memory consumption of squid-3.1.4:
> from 8.0 GB (4pm) to 8.4 GB (7pm) to 8.5 GB (4am next day) to
> 9.4 GB (2pm).
> At night there is low load on the squid, maybe 20-50 req/s.
> 3.1.3 behaves the same, so it does not seem to be related to the
> "ctx enter/exit level" topic discussed in the last mails.
>
> I am now reverting the proxies back to 3.0.STABLE25 but will keep
> one proxy on 3.1.4 for testing.
> Probably something in my setup causes squid to consume too much memory.
>
> Amos, do you have another idea what might cause this and where to look,
> for example which debug level? I can do some tests: I have the
> possibility to slowly put this proxy under load and take it out of
> the production environment afterwards...
>
> Regards,
> Martin
>
>
>
> Amos Jeffries <squid3_at_treenet.co.nz> wrote on 15.06.2010 13:31:40:
>
> <snip>
>>> I am now checking with mem pools off on one of the proxies and report
>>> later whether it changes anything.
> <snip>
>>> 2010/06/15 11:37:06| ctx: enter level 2059: '<lost>'
>>> 2010/06/15 11:37:06| ctx: enter level 2060: '<lost>'
>>> X:}0/06/15 11:37:06| WARNING: suspicious CR characters in HTTP header {
>>> 2010/06/15 11:37:06| ctx: exit level 2060
>>> 2010/06/15 11:37:06| ctx: enter level 2060: '<lost>'
>>> X:}0/06/15 11:37:06| WARNING: suspicious CR characters in HTTP header {
>>> 2010/06/15 11:37:06| ctx: exit level 2060
>>> 2010/06/15 11:37:06| ctx: enter level 2060: '<lost>'
>>> X:}0/06/15 11:37:06| WARNING: suspicious CR characters in HTTP header {
>>> 2010/06/15 11:37:06| ctx: exit level 2060
>>> 2010/06/15 11:37:06| ctx: enter level 2060: '<lost>'
>>> X:}0/06/15 11:37:06| WARNING: suspicious CR characters in HTTP header {
>>> 2010/06/15 11:37:06| ctx: exit level 2060
>>> 2010/06/15 11:37:06| ctx: enter level 2060: '<lost>'
>>> X:}0/06/15 11:37:06| WARNING: suspicious CR characters in HTTP header {
>>> 2010/06/15 11:40:56| ctx: exit level 2060
>>> 2010/06/15 11:40:56| ctx: enter level 2060: '<lost>'
>>> 2010/06/15 11:40:56| ctx: enter level 2061: '<lost>'
>>>
>> Ouch. We've been wondering about these ctx loops. It is not something
>> to be terribly worried about, but can cause some "weird stuff" (yes,
>> that is probably the best explanation).
>>
>> Thanks to your reminder, I've just had another look and found one more
>> in header processing. Hopefully that was it.
>>
>> Amos
>> --
>> Please be using
>> Current Stable Squid 2.7.STABLE9 or 3.1.4
>
>
>
Received on Thu Jun 17 2010 - 14:15:15 MDT

This archive was generated by hypermail 2.2.0 : Fri Jun 18 2010 - 12:00:03 MDT