Re: Updated: pipelined/halfclosed connections

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Tue, 24 Feb 2004 11:50:46 +0100 (CET)

On Tue, 24 Feb 2004, Adrian Chadd wrote:

> Here's my latest patch. I've broken out the parsing/request initialisation
> code from commReadRequest() into a separate function which I can then
> call from keepaliveNextRequest(). Please review and comment.
> I've tested it locally and it seems to work just fine.

Looks good, except that I would prefer to have the do_next_read thing
eliminated, either by moving "all" of the related logic down into
clientParseRequest or by moving it out completely and returning a
different status depending on the result (parsed, needs more data,
failed).

I cannot make up my mind on which is best, and there are some slight
implications if/when the comm I/O buffer management is refined. But for
now either approach is acceptable, and refining it later is trivial. A
rough sketch of the status-return variant follows below.
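To make the second option concrete, here is a minimal standalone sketch
of that control flow. Apart from the clientParseRequest name, everything
in it is an illustrative stand-in (the enum values, the toy parsing
checks, and handleParse are not actual Squid identifiers):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef enum {
    PARSE_DONE,       /* a complete request was parsed */
    PARSE_MORE_DATA,  /* buffer incomplete; schedule another read */
    PARSE_FAILED      /* malformed request; close the connection */
} parse_status;

/* Stand-in for the parsing code broken out of commReadRequest().
 * It reports what happened instead of setting a do_next_read flag. */
static parse_status
clientParseRequest(const char *buf, size_t len)
{
    if (len == 0 || memchr(buf, '\n', len) == NULL)
        return PARSE_MORE_DATA;   /* request line not complete yet */
    if (strncmp(buf, "GET ", 4) != 0 && strncmp(buf, "POST ", 5) != 0)
        return PARSE_FAILED;      /* toy validity check only */
    return PARSE_DONE;
}

/* Both call sites (the read handler and keepaliveNextRequest) would
 * branch on the status, so no flag has to survive across the call. */
static void
handleParse(const char *buf, size_t len)
{
    switch (clientParseRequest(buf, len)) {
    case PARSE_DONE:
        puts("dispatch the parsed request");
        break;
    case PARSE_MORE_DATA:
        puts("re-register the read handler and wait for more data");
        break;
    case PARSE_FAILED:
        puts("send an error reply and close the connection");
        break;
    }
}

int
main(void)
{
    handleParse("GET / HTTP/1.0\r\n", 16);   /* -> dispatch */
    handleParse("GE", 2);                    /* -> needs more data */
    return 0;
}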

> There is, however, a fun massive memory abuse I'm seeing when I'm
> stressing out squid with mass recursive FTP transfers, along with
> loads and loads of ISOs.
>
> http://cyllene.uwa.edu.au/~adrian/bt .. have a look. That particular
> node had about 37,000 entries. Squid took up a good 1.2 GB of RAM.
> It _did_ recover after about 15 minutes, but the memory was so
> fragmented that I needed to restart to get any decent performance.

Ugh.. is that a backtrace? Any backtrace beyond depth 20 in Squid is
a definite sign of broken design.

Unfortunately I am not yet familiar with how mem_node operates, why such
a massive buildup of entries can happen, or why it is recursing in your
trace. Robert?

Regards
Henrik
