Re: [squid-users] Frustrating "Invalid Request" Reply

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Wed, 02 Mar 2011 12:02:17 +1300

 On Tue, 1 Mar 2011 17:56:02 +0200, Ümit Kablan wrote:
> Hi,
>
> 2011/2/28 Amos Jeffries <squid3_at_treenet.co.nz>:
>> On Mon, 28 Feb 2011 16:51:54 +0200, Ümit Kablan wrote:
>>>
>>> Hi, Sorry for the late reply,
>>>
>> <snip>
>>>
>>> Enter the full phrase and hit enter: [192.168.1.10 ->
>>> 192.168.1.120]
>>>
>>> GET /search?hl=tr&source=hp&biw=1280&bih=897&q=ertex&aq=2&aqi=g10&aql=&oq=ert&fp=3405898bc8895081&tch=1&ech=1&psi=_LBrTd6iFM-o8QPm5P3tDA12989033090755&safe=active HTTP/1.1
>>> Host: www.google.com.tr
>>> Proxy-Connection: keep-alive
>>> Referer: http://www.google.com.tr/
>>> Accept: */*
>>> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
>>> AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224
>>> Safari/534.10
>>> Accept-Encoding: gzip,deflate,sdch
>>> Accept-Language: tr-TR,tr;q=0.8,en-US;q=0.6,en;q=0.4
>>> Accept-Charset: ISO-8859-9,utf-8;q=0.7,*;q=0.3
>>> Cookie: NID=44=WDrVJT3IHROI8LLhYljiGzpNonvug9envnNeEoo6qdVxw1B1eHwarlfgZgODzoTsj7i7QGza5luXEqgQuFx7eWduz3Pcc-8IFrLp8tTyIaJC9VgyXEyQAv0qBQD3Dxm9; PREF=ID=e5ce72ddfd5e542a:U=0163fee991eaa35b:FF=0:TM=1298386459:LM=1298903279:S=6Sakp_hgUHZXMW1W
>>>
>>> [192.168.1.120 -> 192.168.1.10]
>>>
>>> HTTP/1.0 400 Bad Request
>>> Server: squid/2.7.STABLE8
>>> Date: Mon, 28 Feb 2011 14:30:43 GMT
>>> Content-Type: text/html
>>> Content-Length: 2044
>>> X-Squid-Error: ERR_INVALID_REQ 0
>>> X-Cache: MISS from kiemserver
>>> X-Cache-Lookup: NONE from kiemserver:3128
>>> Via: 1.0 kiemserver:3128 (squid/2.7.STABLE8)
>>> Connection: close
>>>
>>> The last part is the weird bit. The full URL gets cropped, as if the
>>> browser thinks it is talking directly to the origin, like you already
>>> said. Or I am missing something obvious.
>>
>>
>> I'm still convinced this is some form of configuration mistake
>> somewhere.
>> Let's step through this piece by piece in detail and see if anything
>> turns up.
>>
> Hard to stay sane but OK :-)
>
>> Which browser are you using to test with?
>>  What proxy settings are entered into its control panel?
>
> I tried it with Mozilla Firefox 3.6.13 by entering 192.168.1.10 port
> 3128 under Preferences > Network > Configuration. I configured Internet
> Explorer via Tools > Internet Options > Connections > Local Network
> Configuration and typed in the proxy IP and port. Google Chrome takes
> its settings from the system, so it is the same as IE.

 Good.

  Clicking "use HTTP settings for all protocols" as well?

>
>>
>> What does the client hosts file contain?
>> What does the client resolv.conf or equivalent Windows network
>> connection
>> settings contain as gateway router, domain, and DNS servers?
>
> The client is Windows, configured with a static IP of 192.168.1.120,
> subnet mask 255.255.255.0, and 192.168.1.1 as gateway (and DNS). The
> hosts equivalent is the name-to-address mapping, I assume; I found
> nothing in it (except 127.0.0.1 <-> localhost, I guess).

 Good.
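
 (On the Windows client, a quick way to see all of that in one go, as a
 sketch assuming a default install with the hosts file in the usual
 place, is:

   ipconfig /all
   type %SystemRoot%\system32\drivers\etc\hosts

 The first shows the gateway, DNS servers, and DNS suffix actually in
 use; the second dumps the hosts file contents.)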

 Okay, next steps ... (please check these answers in case something has
 been forgotten or overlooked)

  Are there any NAT, NAPT, Port Forwarding, or Connection Sharing
 settings on the client box?
    If so, what are they? (A quick way to check some of this is sketched
 after these questions.)

  Same question again for the LAN router?

  Same question again for the Squid box?

  Also, is there any black-box filtering device or service between the
 client and Squid boxes?

  Is there any "Web Security" firewall on the client box (is Semantec or
 McAfee filters)?
    if so what are its outward proxy relay settings?
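
 (A couple of ways to check, as a rough sketch and assuming the usual
 tools are installed. On the Squid box, if it is Linux with iptables,
 any NAT or port-forwarding rules will show up in:

   iptables -t nat -L -n -v

 On the Windows client, running

   netstat -an | find ":3128"

 while a page is loading shows whether the browser traffic really goes
 to the proxy port.)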

>
> I sometimes think that the JavaScript makes an explicit request which
> leads to that misinterpretation by the browsers. I have no strong

 JS does all sorts of stuff. In your case it appears to be the only
 working traffic though. The google click-search requests are JS
 background connections.

> clues about it, though. I also want to ask whether we can make some
> workaround at the proxy layer without involving the browsers. As I
> previously said: can Squid fix such bad requests by concatenating
> other fields from the HTTP request to build the correct URL?
>

 A temporary workaround is to set "transparent" on the port. This will
 fill your logs with NAT lookup failures though, and it gets you no
 closer to finding the real solution or what has gone wrong.
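
 (In squid.conf for 2.7 that is roughly just adding the option to the
 listening port line, something like:

   http_port 3128 transparent

 In that mode Squid rebuilds the URL from the Host: header when the
 request line carries only the path, which is why it papers over this
 symptom without explaining it.)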

 Amos