Re: [squid-users] Forwarding loop detected

From: Edoardo COSTA SANSEVERINO <edoardo.costa_at_gmail.com>
Date: Tue, 29 Jun 2010 18:36:45 +0200

On 06/29/2010 01:07 PM, Amos Jeffries wrote:
> Edoardo COSTA SANSEVERINO wrote:
>> Hi all,
>>
>> I'm getting the following error and I just can't figure out what I'm
>> doing wrong. It worked for a while, but now I get:
>>
>> Browser error
>> -------------
>> ERROR
>> The requested URL could not be retrieved
>>
>> While trying to retrieve the URL: http://test.example.com/
>>
>> The following error was encountered:
>>
>> * Access Denied.
>>
>> Access control configuration prevents your request from being
>> allowed at this time. Please contact your service provider if you
>> feel this is incorrect.
>>
>> Your cache administrator is webmaster.
>> Generated Tue, 29 Jun 2010 08:01:45 GMT by localhost (squid/3.0.STABLE8)
>>
>>
>> Squid Error
>> -----------
>> 2010/06/29 07:41:22.244| The request GET http://test.example.com/ is
>> ALLOWED, because it matched 'sites_server_web'
>> 2010/06/29 07:41:22.244| WARNING: Forwarding loop detected for:
>> GET / HTTP/1.0
>> Host: test.example.com
>> User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3)
>> Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3
>> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>> Accept-Language: en-us,en;q=0.5
>> Accept-Encoding: gzip,deflate
>> Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
>> Referer: http://test.example.com/
>> Cookie:
>> __utma=156214138.2072416337.1256440668.1263421087.1270454401.17;
>> SESS404422c7e13985ed9850bca1343102d6=e6b996d3bf323193fec6e785a3356d1c; SESS4986f0d90a6abbc6006cc25a814fe1a8=1c1956864db4e7636f3e8b185b6dd6cc
>>
>> Pragma: no-cache
>> Via: 1.1 localhost (squid/3.0.STABLE8)
>> X-Forwarded-For: 192.168.1.10
>> Cache-Control: no-cache, max-age=259200
>> Connection: keep-alive
>>
>>
>> 2010/06/29 07:41:22.245| The reply for GET http://test.example.com/
>> is ALLOWED, because it matched 'sites_server_web'
>>
>>
>> My current setup is as follows. I made the page request from the
>> laptop to [VMs1].
>>
>>
>> setup
>> -----
>>
>>
>> [VMs1]--[Server/Squid/DNS/FW 1]--{ Internet }---[Server/Squid/DNS/FW 2]-+--[VMs2]
>>                                                                         |
>>                                                                         +--[LAN]--[Laptop]
>>
>
> Diagram got a bit mangled. I'm guessing the Laptop was on network VMs1?
>
>>
>>
>> The following squid config is for [Server 1]
>>
>> squid.conf
>> ----------
>> https_port 91.185.133.180:443 accel
>> cert=/etc/ssl/mail.example.com.crt key=/etc/ssl/mail.example.com.pem
>> defaultsite=mail.example.com vhost protocol=https
>> http_port 91.185.133.180:80 accel defaultsite=test.example.com vhost
>>
>> cache_peer 192.168.122.11 parent 443 0 no-query no-digest
>> originserver login=PASS ssl sslversion=3 sslflags=DONT_VERIFY_PEER
>> front-end-https=on name=server_mail
>> cache_peer 192.168.122.12 parent 80 0 no-query originserver
>> login=PASS name=server_web
>>
>> acl sites_server_mail dstdomain mail.example.com
>> http_access allow sites_server_mail
>> cache_peer_access server_mail allow sites_server_mail
>> cache_peer_access server_mail deny all
>>
>> acl sites_server_web dstdomain test.example.com test.foobar.eu
>> test1.example.com
>> http_access allow sites_server_web
>> cache_peer_access server_web allow sites_server_web
>> cache_peer_access server_web deny all
>>
>> forwarded_for on
>>
>> cache_store_log none
>> debug_options ALL,2
>>
>>
>> The following config is for [Server 2]
>>
>> squid.conf
>> ----------
>> https_port 192.168.1.3:443 accel
>> cert=/etc/ssl/certs/deb03.example.com.crt
>> key=/etc/ssl/private/deb03.example.com.pem
>> defaultsite=deb03.example.com vhost protocol=https
>> http_port 192.168.1.1:80 accel defaultsite=deb02.example.com vhost
>> http_port 192.168.1.1:80 accel defaultsite=oldwww.example.com vhost
>>
>> cache_peer 192.168.122.3 parent 443 0 no-query originserver
>> login=PASS ssl sslversion=3 sslflags=DONT_VERIFY_PEER
>> front-end-https=on name=srv03
>> cache_peer 192.168.122.2 parent 80 0 no-query originserver name=srv02
>> cache_peer 192.168.122.11 parent 80 0 no-query originserver name=srv01
>>
>> acl https proto https
>> acl sites_srv01 dstdomain oldwww.example.com
>> acl sites_srv03 dstdomain deb03.example.com
>> acl sites_srv02 dstdomain deb02.example.com second.example.com
>>
>> http_access allow sites_srv01
>> http_access allow sites_srv03
>> http_access allow sites_srv02
>> cache_peer_access srv01 allow sites_srv01
>> cache_peer_access srv03 allow sites_srv03
>> cache_peer_access srv02 allow sites_srv02
>>
>> forwarded_for on
>>
>> ### Transparent proxy
>> http_port 192.168.1.1:3128 transparent
>> acl lan_network src 192.168.1.0/24
>> acl localnet src 127.0.0.1/255.255.255.255
>> http_access allow lan_network
>> http_access allow localnet
>>
>> cache_dir ufs /var/spool/squid3 1500 16 256
>> ###
>>
>> #cache_store_log none
>> debug_options ALL,2
>>
>>
>> I simply can't see where the loop is. Could someone explain this to
>> me or point me to the right documentation? I had a look around but
>> found no relevant answer.
>
> There are two things which may be happening:
>
> 1) Your NAT interception rules may be catching proxy #2's outbound
> requests and looping them back into #2.
> ** FIX: Make sure that all the proxy machines' IPv4 addresses are
> listed in the NAT bypass rules.
>
> 2) To identify a loop, Squid checks the _unique_ machine name displayed
> in the Via: header ("1.1 localhost (squid/3.0.STABLE8)") to verify that
> the request did not come from itself. Unfortunately the machine
> hostname is set to "localhost", which, as you can see, is actively
> harmful here.
> ** FIX: ensure that the command "hostname" produces a unique name
> for each machine.
> ** WORKAROUND for distros which hard-code "localhost":
> explicitly configure unique_hostname and/or visible_hostname to
> different things in each of the proxies.
>
> Good practice is to use the machine FQDN for uniqueness.
>
> Amos
Hi Amos,

The problem was indeed related to hostnames. I used 'visible_hostname'
and that seems to have solved the problem.
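For reference, the change was just one directive in each proxy's
squid.conf, giving each machine a distinct name (the FQDNs below are
placeholders, not my real ones):

```
# squid.conf on proxy #1 -- use this machine's real FQDN
visible_hostname proxy1.example.com

# squid.conf on proxy #2 -- a different FQDN
visible_hostname proxy2.example.com
```

With distinct names in the Via: header, each Squid can tell its own
requests apart from ones forwarded by the other proxy, so loop
detection behaves correctly again.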

Admittedly, I found the info in a squid archive. I'd searched the whole
web before looking closer ;)
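For completeness, in case someone finds this thread and is hitting
Amos's case 1 instead: excluding the proxy's own address from
interception would look roughly like this. This is only a sketch; the
interface and addresses are examples, not my actual rules:

```
# Let traffic sourced from the proxy box itself (and from any other
# proxy in the chain) pass untouched, so its outbound fetches are not
# redirected back into Squid.
iptables -t nat -A PREROUTING -s 192.168.1.1 -j ACCEPT
# Intercept the remaining LAN port-80 traffic into Squid.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
```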

Thanks for your help.
  -Ed
Received on Tue Jun 29 2010 - 16:37:02 MDT

This archive was generated by hypermail 2.2.0 : Tue Jun 29 2010 - 12:00:03 MDT