Re: Memory usage fix (patch)

From: Adrian Chadd <adrian@dont-contact.us>
Date: Thu, 17 Mar 2005 01:21:18 +0000

On Thu, Mar 17, 2005, Henrik Nordstrom wrote:
> On Thu, 17 Mar 2005, Steven Wilton wrote:
>
> >The problem is that uncacheable objects (ie size > maximum_object_size,
> >or download managers doing multiple partial requests on large files) are
> >always held in memory. Squid does free this memory as the data is sent
> >to the client, but it doesn't look like there's a backoff mechanism when
> >the data is arriving at a much faster rate than it is being sent to the
> >client.
>
> Normally this is dealt with by the fwdCheckDefer function. Maybe your
> epoll implementation does not use the filedescriptor's defer function to
> back off when needed?

Just as a FWIW, this is something that was discussed to death a couple
of years ago. Yup, event-driven IO doesn't play well with the current
defer processing of IO.
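
For context, the defer mechanism being referred to is roughly a
per-filedescriptor callback that the comm layer consults before scheduling
another read on the server side; if it says "defer", the read is postponed so
the client side can catch up. A minimal sketch of that pattern follows - the
names (fd_entry, check_defer, MAX_BUFFERED) are purely illustrative, not
Squid's exact API:

/*
 * Sketch of a per-fd "defer" read-backoff check. Illustrative only;
 * Squid's real code keeps this state in its comm / forward modules.
 */
#include <stdio.h>
#include <stddef.h>

typedef int DEFER_CHECK(void *data);      /* nonzero means "postpone the read" */

typedef struct {
    int fd;
    DEFER_CHECK *defer_check;             /* consulted before each read */
    void *defer_data;
} fd_entry;

typedef struct {
    size_t bytes_buffered;                /* fetched from server, not yet sent */
} fetch_state;

#define MAX_BUFFERED (256 * 1024)         /* arbitrary high-water mark */

/* Defer server-side reads while too much data is queued for the client. */
static int
check_defer(void *data)
{
    fetch_state *f = data;
    return f->bytes_buffered > MAX_BUFFERED;
}

/* Called by the event loop before arming a read on a filedescriptor. */
static int
may_read(fd_entry *e)
{
    if (e->defer_check && e->defer_check(e->defer_data))
        return 0;                         /* skip this round; retry later */
    return 1;
}

int
main(void)
{
    fetch_state f = { 512 * 1024 };       /* 512 KB already queued for a slow client */
    fd_entry server = { 42, check_defer, &f };

    /* The event loop would skip arming a read on this fd this round. */
    printf("may_read: %d\n", may_read(&server));
    return 0;
}

The friction with event-driven IO is presumably that, with a persistent epoll
registration, a deferred fd can't simply be left out of the next poll set the
way it can with select/poll; read interest has to be explicitly masked and then
re-enabled once the defer condition clears.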

> Another issue is that memory is only freed on swapout, which means that
> for async cache_dirs many cached objects will end up using more memory
> than intended.
>
> I have a partial fix for this in my lfs branch, and quite a few fixes in
> how the cache vm is managed. However, I have now discovered a race window
> similar to the above which would still make Squid lose the bandwidth cap
> while the object is cached. Fixing this should not be too hard.

*grin*
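
The swapout point above boils down to this: the in-memory copy of a cached
object can only be released up to whatever has both been delivered to the
slowest client and written to disk, so if the swapout is asynchronous and
lagging, that boundary lags too and the object sits in memory longer than
intended. A toy sketch of that accounting (names are illustrative, not the
actual mem_obj/store code):

/*
 * Toy illustration of "memory only freed on swapout": data below
 * min(slowest client offset, swapout offset) is releasable; an async,
 * lagging swapout keeps that boundary - and the memory - pinned.
 */
#include <stdio.h>

typedef long long off64;

static off64
min_off(off64 a, off64 b)
{
    return a < b ? a : b;
}

/* Lowest offset still needed by any client or by the pending disk write. */
static off64
freeable_below(off64 lowest_client_offset, off64 swapout_offset)
{
    return min_off(lowest_client_offset, swapout_offset);
}

int
main(void)
{
    off64 client = 4 * 1024 * 1024;       /* clients have consumed 4 MB */
    off64 swapout = 512 * 1024;           /* async disk write only at 512 KB */

    /* Only 512 KB can be released, even though clients are far ahead. */
    printf("releasable: %lld bytes\n", (long long)freeable_below(client, swapout));
    return 0;
}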

> The design of this area in the code has quite a bit of potential for
> improvement...

It would. I'd like to finally get back into this now - doubly so if there's
someone else here who is interested in fixing the network IO code path
in squid-2.5.

adrian

-- 
Adrian Chadd			"To believe with certainty we must first
<adrian@creative.net.au>	    begin by doubting."
			
Received on Wed Mar 16 2005 - 18:21:23 MST
