Re: Updated: pipelined/halfclosed connections

From: Adrian Chadd <adrian@dont-contact.us>
Date: Tue, 24 Feb 2004 19:42:12 -0700

On Tue, Feb 24, 2004, Henrik Nordstrom wrote:
> On Tue, 24 Feb 2004, Adrian Chadd wrote:
>
> > Here's my latest patch. I've broken out the parsing/request initialisation
> > code from commReadRequest() into a separate function which I can then
> > call from keepaliveNextRequest(). Please review and comment.
> > I've tested it locally and it seems to work just fine.
>
> Looks good, except that I would prefer to have the do_next_read thing
> eliminated. Either by moving "all" related logic down to
> clientParseRequest or by moving it completely out and returning a
> different status depending on the result (parsed, needs more data, failed).

*nod*

It doesn't look like a "trivial" fix. Would you mind if I committed
the current work, sans re-working the do_next_read flag, so it gets
some testing? I'm trying to get squid-3 stable before I jump in
to try and improve some of the code.
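
Just to check I'm reading your suggestion right, here's roughly the
shape I have in mind. The names are made up for illustration; they are
not the actual client_side.cc symbols:

/* Illustrative sketch only; names don't match the real client_side.cc. */
#include <cstdio>

struct ConnStateData {
    int fd;
    /* ... read buffer, offsets, etc ... */
};

enum ParseResult {
    PARSE_OK,        /* a complete request was parsed */
    PARSE_NEED_MORE, /* incomplete; caller should arm another read */
    PARSE_FAILED     /* malformed request; caller should close */
};

/* Pure parse step: inspects the buffered data, makes no I/O decisions. */
static ParseResult
clientTryParseRequest(ConnStateData *conn)
{
    (void)conn;
    return PARSE_NEED_MORE; /* stub */
}

/*
 * Both the commReadRequest() completion path and keepaliveNextRequest()
 * could funnel through here; the do_next_read flag disappears because
 * the return value says whether to schedule the next read.
 */
static void
clientAfterParse(ConnStateData *conn)
{
    switch (clientTryParseRequest(conn)) {
    case PARSE_OK:
        std::printf("fd %d: dispatch the parsed request\n", conn->fd);
        break;
    case PARSE_NEED_MORE:
        std::printf("fd %d: schedule another comm read\n", conn->fd);
        break;
    case PARSE_FAILED:
        std::printf("fd %d: close the connection\n", conn->fd);
        break;
    }
}

int main()
{
    ConnStateData conn = { 42 };
    clientAfterParse(&conn);
    return 0;
}

With that shape the only place a new read gets armed is the
PARSE_NEED_MORE arm, which is what kills the flag.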

> > http://cyllene.uwa.edu.au/~adrian/bt .. have a look. That particular
> > node had about 37,000 entries.. Squid took up a good 1.2gig of RAM.
> > It _did_ recover after about 15 minutes but the memory was so
> > fragmented I needed to restart to get any decent performance..
>
> Ugh.. is that a backtrace? Any backtrace beyond depth 20 in Squid is
> a definite sign of a broken design..

Heh. Yup.

> Unfortunately I am not yet familiar with how mem_node operates, why such
> a massive buildup of entries can happen or why it is recursing in your
> trace. Robert?

I wouldn't blame Robert just yet.. I haven't changed any of the code
relating to this; there may be some boundary case which hasn't been
thought of.. I _was_ mirroring a local FTP server which had far, far
too many ISO images on it, and I did break the process halfway through.
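
To make the stack-depth point concrete: I don't know yet where the
recursion actually lives, but the shape is the classic one-frame-per-node
list walk. A made-up sketch of the failure mode, nothing to do with the
real mem_node internals:

/* Illustrative only: the failure shape, not the real mem_node code. */
struct mem_node {
    char data[4096];
    mem_node *next;
};

/*
 * One stack frame per node: with ~37,000 entries that's tens of
 * thousands of frames, which would match a backtrace like the one
 * linked above.
 */
void
freeChainRecursive(mem_node *n)
{
    if (!n)
        return;
    mem_node *rest = n->next;
    delete n;
    freeChainRecursive(rest);
}

/* Same work in constant stack space. */
void
freeChainIterative(mem_node *n)
{
    while (n) {
        mem_node *rest = n->next;
        delete n;
        n = rest;
    }
}

int main()
{
    mem_node *head = 0;
    for (int i = 0; i < 37000; ++i) {
        mem_node *m = new mem_node;
        m->next = head;
        head = m;
    }
    /* the recursive variant risks blowing the stack at this size */
    freeChainIterative(head);
    return 0;
}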

Adrian
Received on Tue Feb 24 2004 - 19:42:13 MST
