Re: [squid-users] Squid 3.3 is very aggressive with memory

From: Eliezer Croitoru <eliezer_at_ngtech.co.il>
Date: Wed, 18 Dec 2013 02:38:48 +0200

OK Nathan,

The next step is your squid.conf, which can clarify a couple of things.
You also have the cache-mgr interface over HTTP, which exposes runtime
statistics, for example:
http://proxy_ip:3128/squid-internal-mgr/info
It will provide much more data than just watching the memory usage.
Please also provide the output of "free -m".
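
You can also pull those pages from the command line, for example (a
sketch; it assumes squidclient is installed and that your manager ACLs
allow access from the client host):

  # general runtime info, including Squid's own memory accounting
  squidclient -h proxy_ip -p 3128 mgr:info
  # memory pool utilization details
  squidclient -h proxy_ip -p 3128 mgr:mem
  # the same info page over plain HTTP
  curl http://proxy_ip:3128/squid-internal-mgr/info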

Thanks,
Eliezer

On 17/12/13 07:24, Nathan Hoad wrote:
> Okay, to follow up. I still cannot reproduce this in a lab
> environment, but I have implemented a way of doing what Alex described
> on the production machine. I run two instances of Squid with the same
> config and switch the transparent proxy out by changing the redirect
> rules in iptables. The second instance is running without a cache_dir
> though, to prevent the possibility of two instances sharing the same
> directory and running amok. If requested, I can create a second
> cache_dir for the second instance to mimic the config entirely.
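>
> For reference, the switchover is just a swap of the REDIRECT target in
> the NAT table, along these lines (a sketch rather than the exact rules
> on the box; it assumes interception of port 80, the two instances
> listening on 3128 and 3129, eth0 as the internal interface, and the
> interception rule sitting first in the PREROUTING chain):
>
>     # point intercepted HTTP at the second instance
>     iptables -t nat -R PREROUTING 1 -i eth0 -p tcp --dport 80 \
>         -j REDIRECT --to-ports 3129
>     # ...and back at the first
>     iptables -t nat -R PREROUTING 1 -i eth0 -p tcp --dport 80 \
>         -j REDIRECT --to-ports 3128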
>
> While running under this configuration, I've confirmed that memory
> usage does go up when active, and stays at that level when inactive,
> allowing some time for timeouts and whatnot. I'm currently switching
> between the two instances every fifteen minutes.
>
> Here is a link to the memory graph for the entire running time of the
> second process, sampled at 1-minute intervals:
> http://getoffmalawn.com/static/mem-graph.png. The graph shows memory
> use steadily increasing during activity, but remaining reasonably
> stable during inactivity.
>
> Where shall we go from here? Given that I can switch between the
> instances, impacting performance on the production box is not of huge
> concern now, so I can run the second instance under Valgrind, or bump
> up the debug logging, or whatever would be helpful.
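>
> For the Valgrind option, I'd run the second instance in the foreground
> in single-process mode so nothing forks away from the tool, along
> these lines (a sketch; the config path is a placeholder, and I'd also
> set "memory_pools off" in that config so Squid's pooled allocations
> don't hide leaks from Valgrind):
>
>     valgrind --leak-check=full --show-reachable=yes \
>         squid -N -f /etc/squid/squid-second.conf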
>
> As an aside, while reading some of the code pointed at by the traces
> I've gathered, I stumbled upon the fact that nearly every caller of
> StoreEntry::replaceHttpReply will leak an HttpReply object if the
> StoreEntry's internal mem_obj pointer is NULL: the method bails out
> early in that case, so the reply the caller allocated is never adopted
> or freed. A critical log message is emitted in that situation, and I
> have not seen it, so I can conclude this is not the leak I am chasing,
> but it's an issue nonetheless. If there's interest, I'll submit a
> patch for it.
>
> Many thanks,
>
> Nathan.
> --
> Nathan Hoad
> Software Developer
> www.getoffmalawn.com
>
>
> On Sat, Dec 14, 2013 at 8:11 PM, Nathan Hoad <nathan_at_getoffmalawn.com> wrote:
>> On Fri, Dec 13, 2013 at 10:33 PM, Eliezer Croitoru <eliezer_at_ngtech.co.il> wrote:
>>> Hey Nathan,
>>>
>>> I am looking for more details on the subject at hand, in the form of:
>>> Networking Hardware
>>
>> Straight out of lspci:
>>
>> 02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722
>> Gigabit Ethernet PCI Express
>> 03:01.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5703
>> Gigabit Ethernet (rev 10)
>>
>> Two network cards - one for internal traffic, the other for external.
>>
>>> Testing Methods
>>
>> - a mixture of direct and intercepted HTTP and HTTPS traffic, both
>> hitting and bypassing the configured ICAP server.
>> - both valid and invalid upstream SSL certificates, with hundreds of
>> concurrent requests from a single client.
>> - thrashing Squid with thousands of connections that are aborted
>> after 800ms, running for ~30-40 seconds at a time.
>> - currently, replaying the week's access.log through Squid as a rough
>> approximation of production traffic, to see if that triggers it
>> (sketched below).
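>>
>> The replay is nothing fancy, roughly the following (a simplified
>> sketch; it assumes the default "squid" access.log format, where the
>> request URL is field 7, and the real run adds some concurrency):
>>
>>     # re-request every logged URL through the proxy
>>     awk '{print $7}' access.log | while read -r url; do
>>         curl -s -o /dev/null -x http://127.0.0.1:3128 "$url"
>>     done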
>>
>>> Is it an SMP Squid setup?
>> - both SMP (2 workers) and non-SMP.
>>
>>> In case you are using a 32-bit system, which is limited in how much
>>> RAM it can address? (I remember something about Windows NT with 64GB.)
>>
>> - This particular host has 3GB of RAM. It previously ran a non-SMP
>> Squid 3.2.13 instance which, according to logs, maxed out at ~500MB
>> resident after running for hours or days at a time, with a 220MB
>> cache_mem. Now, however, memory usage grows to 900MB in ~40 minutes
>> and typically reaches 1.5GB in ~4 hours. We have a ulimit in place to
>> kill Squid once it hits 1.5GB; before we put that in place it
>> typically reached 2GB.
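>>
>> The cap is an address-space ulimit applied in the startup script,
>> roughly the following (the value is in kilobytes; the exact placement
>> is a detail of our init setup):
>>
>>     # cap Squid's virtual address space at ~1.5GB so a runaway
>>     # process fails its allocations and exits rather than
>>     # exhausting the box
>>     ulimit -v 1572864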
>>
>>>
>>> If you can provide more details, I will be happy to try and test it.
>>>
>>> Thanks,
>>> Eliezer
>>
>> If there's any other information you think may be useful, feel free to ask.
Received on Wed Dec 18 2013 - 00:44:03 MST