Re: Memory usage fix (patch)

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Thu, 17 Mar 2005 02:11:38 +0100 (CET)

On Thu, 17 Mar 2005, Steven Wilton wrote:

> The problem is that uncacheable objects (ie size > maximum_object_size,
> or download managers doing multiple partial requests on large files) are
> always held in memory. Squid does free this memory as the data is sent
> to the client, but it doesn't look like there's a backoff mechanism when
> the data is arriving at a much faster rate than it is being sent to the
> client.

Normally this is dealt with by the fwdCheckDefer function. Maybe your
epoll implementation does not use the filedescriptor's defer function to
back off when needed?
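
Roughly the idea is this (a sketch only; apart from fwdCheckDefer the
names here are made up, not the actual Squid identifiers): before each
server-side read the comm loop asks a per-fd callback whether to back
off, and while it says yes the read handler is simply skipped, letting
TCP flow control throttle the server.

#include <stddef.h>

#define READ_AHEAD_MAX (16 * 1024)   /* allowed server/client gap */

struct transfer {
    size_t server_offset;   /* bytes received from the origin server */
    size_t client_offset;   /* bytes delivered to the slowest client */
};

/* Return nonzero to defer the next server read, in the same spirit
 * as fwdCheckDefer checking how far ahead of the client we are. */
static int check_defer(int fd, void *data)
{
    struct transfer *t = data;
    (void)fd;
    return (t->server_offset - t->client_offset) > READ_AHEAD_MAX;
}

If your epoll code never consults the registered defer function before
re-arming the read event, the backoff never happens and the object just
accumulates in memory.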

However, there is a race in the logic used by this function when
multiple clients are accessing the same cacheable object and the initial
client disconnects. If this happens then the cap on the connection can be
lost in certain situations. But memory should still be freed.
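
To illustrate the race (illustrative types, not the real store client
structures): the cap has to follow the lowest offset of the remaining
clients, so it must be recomputed when a client detaches rather than
remembered from the initial client.

#include <stddef.h>

struct sclient {
    size_t copy_offset;         /* how far this client has read */
    struct sclient *next;
};

/* Recompute the slowest client on every defer check; caching the
 * value taken from the first client is what opens the race window. */
static size_t lowest_offset(const struct sclient *list)
{
    size_t low = (size_t)-1;    /* no clients -> effectively no cap */
    for (; list; list = list->next)
        if (list->copy_offset < low)
            low = list->copy_offset;
    return low;
}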

Another issue is that memory is only freed on swapout, which means that
for async cache_dirs many cached objects will end up using more memory
than intended.
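
The effect is easy to state in code (a sketch, not actual Squid code):
only data whose swapout has completed can be released from memory, and
with an async cache_dir the swapout offset trails what has been
received.

#include <stddef.h>

/* Only what has safely hit disk may be freed from memory.  With an
 * async cache_dir, swapped_out lags received, so the resident window
 * (received - swapped_out) grows well beyond the intended amount. */
static size_t releasable(size_t received, size_t swapped_out)
{
    return swapped_out < received ? swapped_out : received;
}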

I have a partial fix for this in my lfs branch, plus quite a few fixes
in how the cache vm is managed. However, I have now discovered a race
window similar to the above which would still make Squid lose the
bandwidth cap while the object is cached. Fixing this should not be too
hard.

The design of this area in the code has quite a bit of potential for
improvement...

Regards
Henrik
