Re: [PATCH] Consumption of POST body

From: Tsantilas Christos <chtsanti_at_users.sourceforge.net>
Date: Tue, 22 Jan 2013 10:55:00 +0200

On 01/22/2013 05:05 AM, Amos Jeffries wrote:
> On 22/01/2013 9:15 a.m., Tsantilas Christos wrote:
>> On 01/18/2013 08:37 PM, Alex Rousskov wrote:
>>> On 01/18/2013 08:14 AM, Steve Hill wrote:
>>>
>>>> The attached patch enables autoconsumption of the BodyPipe if access
>>>> isn't allowed in order to discard the request body rather than letting
>>>> it fill the buffers.
>>> Please note that the patch enables auto-consumption for all callout
>>> errors, not just access denials.
>>>
>>> Also, the trunk code is even more complex in this area. I believe your
>>> patch, when carefully ported to trunk, may enable auto-consumption for
>>> all callout errors *that have readNextRequest set*. That makes sense to
>>> me: if we are going to read the next request, we should consume the
>>> current one without closing the client connection so we should not call
>>> expectNoForwarding() that may have that side effect.
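[A minimal, self-contained sketch of the idea being discussed — this is a toy model, not actual Squid code; BodyPipe, enableAutoConsumption, and onCalloutError here are illustrative stand-ins. On a callout error with readNextRequest set, the body pipe is switched to auto-consume so the client's POST data is drained instead of accumulating in buffers:]

```cpp
#include <cstddef>
#include <string>

// Toy stand-in for Squid's BodyPipe: buffers producer data until a
// consumer takes it, or discards it once auto-consumption is enabled.
class BodyPipe {
public:
    void enableAutoConsumption() { autoConsume_ = true; buffered_.clear(); }
    // Producer side: append data; discard immediately if auto-consuming.
    void append(const std::string &data) {
        if (autoConsume_)
            discarded_ += data.size();   // drained, not stored
        else
            buffered_ += data;           // would otherwise fill buffers
    }
    std::size_t bufferedSize() const { return buffered_.size(); }
    std::size_t discardedSize() const { return discarded_; }
private:
    std::string buffered_;
    std::size_t discarded_ = 0;
    bool autoConsume_ = false;
};

// Hypothetical callout-error handler mirroring the patch's intent:
// keep the connection open (readNextRequest) but bit-bucket the body,
// instead of calling expectNoForwarding(), which may close the
// client connection as a side effect.
void onCalloutError(BodyPipe &requestBody, bool readNextRequest) {
    if (readNextRequest)
        requestBody.enableAutoConsumption();
    // else: the expectNoForwarding() path may close the connection
}
```

[So after onCalloutError(pipe, true), further POST data is discarded rather than buffered, and the connection stays usable for the next request.]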
>>>
>>>
>>> The overall intent sounds correct to me. Even if we do not want to keep
>>> the connection open after responding with an error, it may be impossible
>>> to deliver the error message to a half-broken client that does not read
>>> while writing to us.
>>>
>>> However, this brings us back to trunk r11920 and bug 3420. While that
>>> fix was triggered by an assertion, it is clear from the code comments
>>> and the commit message that we were worried about a client tying Squid
>>> client connection "forever" (with or without sending data). Your changes
>>> will re-enable that behavior in some cases, I guess. More on that below.
>>>
>>>
>>>> I'm not entirely sure this is the best fix - calling
>>>> expectNoForwarding() would seem to be neater, but this also causes the
>>>> client-side socket to close, which would break authentication
>>>> mechanisms
>>>> that require the connection to be kept alive. Hopefully someone with a
>>>> better understanding of the Squid internals can review the patch and
>>>> tell me if it is the right approach?
>>> Thank you very much for treading carefully and disclosing potential
>>> problems!
>>>
>>> Indeed, having two methods with approximately the same purpose and a
>>> hidden critical distinction is asking for trouble. A lot of existing
>>> expectNoForwarding-path code comments describe the situation that
>>> matches yours. Moreover, as far as I can see, we may call the old
>>> expectNoForwarding() first and then call your forwardToBitbucket()
>>> later, all for the same error!
>>>
>>> As far as I can tell, we have several cases to take care of (or ignore)
>>> here:
>>>
>>> 1. Authentication: Must read next request on the same connection so must
>>> have readNextRequest set (if not, this is probably a bug). Have to live
>>> with the possibility of the client sending forever.
>> In this case we may have a problem only for NTLM or similar
>> authentication. But in that case, imagine that the HTTP client posts a
>> huge file and Squid responds with a "401 Unauthorized". Is it normal to
>> read all of the file in this case?
>
> "Maybe." Some clients work okay, some don't. For example, IE6 requires it,
> while Sharepoint, Exchange, etc. do not work if it's sent.
>
>> We are going to read all of the file in the next
>> authenticated request too. So for one request we have to read ALL of the
>> posted file twice. That is not normal.
>> I believe the best approach is to send the error page and abort. The
>> browser should handle early replies to support such cases...
>>
>> Also, the curl version I have supports early replies; I used a custom
>> perl client to reproduce the problem...
>
> HTTP allows us to close a connection at any time. For NTLM handshakes we
> can (configurably) close the connection after the first 401 message, but
> not after the second, and can again after the third request - which should
> have succeeded.

Yep. But this is proof that connection-oriented authentication schemes are
problematic...

What I want to say is that if we read all of the body after responding with
an error, we may have the following problem:
 - The system admin has denied me access to the web.
 - I make jokes by sending 50-100 huge files to his proxy over
access-denied connections (or by opening 70000 connections and sending 1
byte per minute, as you said).

But this problem exists only for NTLM and similar authentication
methods. In the other cases we have no (huge) problem with simply
aborting the connection...
Maybe a configuration option is a good idea...
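If this were made configurable, it might look something like the following
squid.conf fragment (the directive name and values are hypothetical, just to
illustrate the idea; no such option exists today):

```
# Hypothetical directive, not an existing Squid option:
# what to do with a denied request's body -
#   drain: consume and discard it, keeping the connection usable
#          (needed for NTLM and similar connection-oriented auth)
#   abort: send the error page and close the client connection
on_denied_request_body abort
```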

>
> Amos
>
>
Received on Tue Jan 22 2013 - 08:55:19 MST

This archive was generated by hypermail 2.2.0 : Tue Jan 22 2013 - 12:00:07 MST