RE: squid 2.0 on linux 2.1.* issue

From: Jordan Mendelson <jordy@dont-contact.us>
Date: Thu, 15 Oct 1998 23:15:58 -0400

This definitely sounds like what is happening to me and to a few Hotmail users
we have, which I reported about earlier. Any time a large POST request is split
into multiple packets, the connection is dropped (though the POST request
itself succeeds) and Netscape reports an error about the connection being
lost.

I noticed that part of the response to the POST request did make it back to
the browser.

Anyhow, #define LINGERING_CLOSE; I believe that should fix it. It basically
does a read() to EOF before close()'ing the socket (if I'm reading comm.c
correctly, that is :).

Jordan

--
Jordan Mendelson     : http://jordy.wserv.com
Web Services, Inc.   : http://www.wserv.com
> -----Original Message-----
> From: Paul Phillips [mailto:paulp@go2net.com]
> Sent: Thursday, October 15, 1998 8:14 PM
> To: Squid
> Subject: squid 2.0 on linux 2.1.* issue
>
>
> We recently started seeing connections to squid dropped on certain
> clients doing POST requests.  This was tracked down by one of our
> developers as the following:
>
> On 15 Oct 1998, Tom May wrote:
>
> > It is a squid bug.  Or linux is being overly pedantic.  My browser is
> > setting content length to 1980 bytes, but writing 1982 bytes.  It
> > probably adds crlf to the end.  Squid is only reading 1980 bytes.
> > Since 2 bytes are left unread on the connection, linux sends a reset
> > when squid closes the socket to indicate to the browser that not all
> > the data was read by squid.  This behaviour is new.  From tcp_close()
> > in /usr/src/linux/net/ipv4/tcp.c:
> >
> >         /* As outlined in draft-ietf-tcpimpl-prob-03.txt, section
> >          * 3.10, we send a RST here because data was lost.  To
> >          * witness the awful effects of the old behavior of always
> >          * doing a FIN, run an older 2.1.x kernel or 2.0.x, start
> >          * a bulk GET in an FTP client, suspend the process, wait
> >          * for the client to advertise a zero window, then kill -9
> >          * the FTP client, wheee...  Note: timeout is always zero
> >          * in such a case.
> >          */
>
> It seems to happen only when the request body is split across
> multiple packets.
>
> Should we read a couple of extra bytes in read_post_request, or does a
> more Right Thing come to mind?
>
Received on Thu Oct 15 1998 - 21:02:18 MDT
