Re: squid seg faults on bogusly large URLs

From: James R Grinter <jrg@dont-contact.us>
Date: Thu, 29 Aug 1996 09:59:28 +0000

On Wed 28 Aug, 1996, Andreas Strotmann <Strotmann@rrz.uni-koeln.de> wrote:
>Not unless you do that in a Netscape auto-config file. The problem is in
>the parsing of the URL before any decision about caching is made (well,
>actually within the code that writes the error message!).

I'll just add to this - we observed overly long URLs corrupting
memory and changing the name of the cache/log file. Someone else
on this list had spotted that some time ago, and I meant to
mention that, from the log files, we tracked it down to the size
of the request.
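
For anyone poking at this locally: the symptom points at an
unbounded copy of the URL into a fixed-size buffer somewhere on
the error-message path. A purely illustrative sketch of the sort
of bounds checking that stops it - none of the names below come
from the Squid source - might look like:

    /*
     * Hypothetical sketch only - none of these names come from the
     * Squid source.  The point is just that the client-supplied URL
     * has to be bounded before it is formatted into a fixed-size
     * error buffer.
     */
    #include <stdio.h>
    #include <string.h>

    #define ERR_BUF_SZ 4096
    #define MAX_URL    1024

    static void build_error_page(char *out, const char *url)
    {
        char shorturl[MAX_URL + 1];

        /* copy at most MAX_URL bytes of the URL and NUL-terminate it */
        strncpy(shorturl, url, MAX_URL);
        shorturl[MAX_URL] = '\0';

        /* bounded formatting: an unbounded sprintf() here is exactly
         * the sort of thing that lets an oversized URL scribble over
         * whatever happens to live next to the buffer */
        snprintf(out, ERR_BUF_SZ,
                 "<TITLE>ERROR</TITLE>\n<H1>Invalid URL</H1>\n<P>%s</P>\n",
                 shorturl);
    }

    int main(void)
    {
        char page[ERR_BUF_SZ];
        char huge[8192];

        memset(huge, 'A', sizeof(huge) - 1);    /* a bogusly large "URL" */
        huge[sizeof(huge) - 1] = '\0';

        build_error_page(page, huge);
        fputs(page, stdout);
        return 0;
    }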

Whilst we're discussing request types: at some point in the
Squid betas, support for HTTP/0.9 requests got dropped/broken.
What do people think about making some additions to support
them better (suppressing MIME headers in the response, etc.)?
The HTTP/1.0 spec says that we should support them.

(the code in question is partly at icp.c:parseHttpRequest:

    /* grab whatever follows the URL on the request line */
    token = strtok(NULL, "");
    /* advance t to the end of that token (CR, LF or end of string) */
    for (t = token; t && *t && *t != '\n' && *t != '\r'; t++);
    /* no version field, or an empty/unterminated one, means the whole
     * request is rejected - which is what kills HTTP/0.9 requests */
    if (t == NULL || *t == '\0' || t == token) {
        debug(12, 3, "parseHttpRequest: Missing HTTP identifier\n");
        xfree(inbuf);
        return -1;
    }

but I've not spent time looking at how easy it is to suppress
sending the headers out to the client. A complete fix would
probably also make the request to the remote server as HTTP/1.0:
I think Squid forces that already anyway.)
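
To make the discussion concrete, here is a standalone sketch -
not a patch, and none of these names are from icp.c - of parsing
the request line so that a version-less request is accepted and
flagged as HTTP/0.9; the reply path would then check that flag
and send only the entity body, with no status line and no MIME
headers:

    /*
     * Standalone sketch, not a patch against icp.c - the struct and
     * field names are made up for illustration.  A request line with
     * no HTTP-Version field is accepted and flagged as HTTP/0.9
     * instead of being rejected.
     */
    #include <stdio.h>

    struct parsed_req {
        char method[16];
        char url[1024];
        int ver_major;
        int ver_minor;          /* 0.9 means: suppress reply headers */
    };

    static int parse_request_line(const char *line, struct parsed_req *r)
    {
        int n = sscanf(line, "%15s %1023s HTTP/%d.%d",
                       r->method, r->url, &r->ver_major, &r->ver_minor);
        if (n == 4)
            return 0;           /* full request, e.g. HTTP/1.0 */
        if (n == 2) {           /* no version field: HTTP/0.9 simple request */
            r->ver_major = 0;
            r->ver_minor = 9;
            return 0;
        }
        return -1;              /* malformed request line */
    }

    int main(void)
    {
        struct parsed_req r;
        if (parse_request_line("GET /index.html", &r) == 0)
            printf("%s %s -> HTTP/%d.%d\n",
                   r.method, r.url, r.ver_major, r.ver_minor);
        return 0;
    }

The request forwarded to the origin server could still go out as
HTTP/1.0, as it does now; only the reply to the client would change.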

-- jrg.