[squid-users] Tproxy immediately closing connection

From: Jan Krupa <jan@krupa.pw>
Date: Fri, 25 Jul 2014 00:02:08 +0200

Hi all,

I've been struggling to configure transparent proxy for IPv6 on my
Raspberry Pi acting as a router following the guide:
http://wiki.squid-cache.org/Features/Tproxy4
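
For reference, the IPv6 version of the policy routing and firewall
rules from that guide looks roughly like this (the port matches my
squid.conf below; exact details may differ from what I ran):

# route marked packets to the local machine
ip -6 rule add fwmark 1 lookup 100
ip -6 route add local ::/0 dev lo table 100

# mark packets belonging to existing proxied sockets, then divert
# new port-80 flows to the squid tproxy port
ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129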

Despite all my efforts, all I got was Squid immediately closing the
connection after it was established (not rejecting the connection: the
three-way handshake succeeds and then the client receives an RST packet).

For example:

root@rpi:~# telnet ::1 3129
Trying 2001:470:71:604::1...
Connected to 2001:470:71:604::1.
Escape character is '^]'.
Connection closed by foreign host.
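
The RST itself is visible in a packet capture running alongside the
telnet, with something like:

# watch the handshake and the reset; pick the interface the test uses
tcpdump -ni lo 'tcp port 3129'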

I tried several systems, with the following results:

Raspbian (Debian-based), 3.12.22 kernel, squid 3.4.6 - unsuccessful
CentOS 6.5, 2.6.32 kernel, squid 3.4.6 - unsuccessful
Fedora 20, 3.12 kernel - successful
Ubuntu 14.04, 3.13 kernel, squid 3.3.6 - successful

The configuration is just the default with the tproxy port added:

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny all
http_port 3128
http_port 3129 tproxy
coredump_dir /var/spool/squid3
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
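
In case it matters, the tproxy flag needs a Squid built with TPROXY
support; the configure options can be checked with something along
these lines (the binary is squid3 on the Debian-based systems, squid
elsewhere):

# TPROXY on Linux wants --enable-linux-netfilter and a libcap-aware build
squid3 -v | tr ' ' '\n' | grep -i -E 'netfilter|cap'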

I dumped the logs on CentOS 6.5 to see what's going on.

The Squid cache log at its most verbose (the -X option):

2014/07/24 23:36:11.147| TcpAcceptor.cc(220) doAccept: New connection on
FD 10
2014/07/24 23:36:11.147| TcpAcceptor.cc(295) acceptNext: connection on
local=[::]:3129 remote=[::] FD 10 flags=25
2014/07/24 23:36:11.147| fd.cc(221) fd_open: fd_open() FD 8 HTTP Request
2014/07/24 23:36:11.147| Intercept.cc(371) Lookup: address BEGIN:
me/client= [::1]:3129, destination/me= [::1]:36047
2014/07/24 23:36:11.147| TcpAcceptor.cc(287) acceptOne: Listener:
local=[::]:3129 remote=[::] FD 10 flags=25 accepted new connection
local=[::1]:3129 remote=[::1]:36047 FD 8 flags=17 handler Subscription:
0x29f9a20*1
2014/07/24 23:36:11.147| AsyncCall.cc(18) AsyncCall: The AsyncCall
httpAccept constructed, this=0x25d6bb0 [call40]
2014/07/24 23:36:11.147| cbdata.cc(419) cbdataInternalLock: cbdataLock:
0x257c398=3
2014/07/24 23:36:11.147| cbdata.cc(419) cbdataInternalLock: cbdataLock:
0x257c398=4
2014/07/24 23:36:11.147| cbdata.cc(510) cbdataReferenceValid:
cbdataReferenceValid: 0x257c398
2014/07/24 23:36:11.147| cbdata.cc(419) cbdataInternalLock: cbdataLock:
0x257c398=5
2014/07/24 23:36:11.147| cbdata.cc(456) cbdataInternalUnlock:
cbdataUnlock: 0x257c398=4
2014/07/24 23:36:11.147| AsyncCall.cc(85) ScheduleCall:
TcpAcceptor.cc(317) will call httpAccept(local=[::1]:3129
remote=[::1]:36047 FD 8 flags=17, flag=-1, data=0x257c398, MXID_1) [call40]
2014/07/24 23:36:11.147| ModEpoll.cc(139) SetSelect: FD 10, type=1,
handler=1, client_data=0x29fb108, timeout=0
2014/07/24 23:36:11.147| AsyncCallQueue.cc(51) fireNext: entering
httpAccept(local=[::1]:3129 remote=[::1]:36047 FD 8 flags=17, flag=-1,
data=0x257c398, MXID_1)
2014/07/24 23:36:11.147| AsyncCall.cc(30) make: make call httpAccept
[call40]
2014/07/24 23:36:11.147| cbdata.cc(510) cbdataReferenceValid:
cbdataReferenceValid: 0x257c398
2014/07/24 23:36:11.147| cbdata.cc(510) cbdataReferenceValid:
cbdataReferenceValid: 0x257c398
2014/07/24 23:36:11.147| cbdata.cc(419) cbdataInternalLock: cbdataLock:
0x257c398=5
2014/07/24 23:36:11.147| cbdata.cc(510) cbdataReferenceValid:
cbdataReferenceValid: 0x257c398
2014/07/24 23:36:11.147| client_side.cc(3408) httpAccept:
reentrant debuging 2-{cbdata.cc(510) cbdataReferenceValid:
cbdataReferenceValid: 0x257c398}-2
httpAccept: local=[::]:3129 remote=[::] FD 10 flags=25: accept failure:
(0) No error.
2014/07/24 23:36:11.147| cbdata.cc(456) cbdataInternalUnlock:
cbdataUnlock: 0x257c398=4
2014/07/24 23:36:11.147| AsyncCallQueue.cc(53) fireNext: leaving
httpAccept(local=[::1]:3129 remote=[::1]:36047 FD 8 flags=17, flag=-1,
data=0x257c398, MXID_1)
2014/07/24 23:36:11.147| cbdata.cc(456) cbdataInternalUnlock:
cbdataUnlock: 0x257c398=3
2014/07/24 23:36:11.147| cbdata.cc(456) cbdataInternalUnlock:
cbdataUnlock: 0x257c398=2
2014/07/24 23:36:11.147| Connection.cc(33) ~Connection: BUG #3329:
Orphan Comm::Connection: local=[::1]:3129 remote=[::1]:36047 FD 8 flags=17
2014/07/24 23:36:11.147| Connection.cc(34) ~Connection: NOTE: 1 Orphans
since last started.
2014/07/24 23:36:11.148| comm.cc(1080) _comm_close: comm_close: start
closing FD 8

The most interesting, though not very helpful, line is "accept failure:
(0) No error."

Strace log:

[pid 24762] accept(10, {sa_family=AF_INET6, sin6_port=htons(34183),
inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0,
sin6_scope_id=0}, [28]) = 8
[pid 24762] getsockname(8, {sa_family=AF_INET6, sin6_port=htons(3129),
inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0,
sin6_scope_id=0}, [28]) = 0
[pid 24762] fcntl(8, F_GETFD) = 0
[pid 24762] fcntl(8, F_SETFD, FD_CLOEXEC) = 0
[pid 24762] fcntl(8, F_GETFL) = 0x2 (flags O_RDWR)
[pid 24762] fcntl(8, F_SETFL, O_RDWR|O_NONBLOCK) = 0
[pid 24762] gettimeofday({1406198332, 369146}, NULL) = 0
[pid 24762] read(8, 0x7fff6fc443f0, 65535) = -1 EAGAIN (Resource
temporarily unavailable)
[pid 24762] close(8) = 0

Something seems to be wrong with the connection handling. The read()
returning EAGAIN looks normal for a freshly accepted non-blocking socket
with no data yet, so Squid appears to be closing the connection
deliberately rather than on an I/O error. I don't know whether this is
the kernel's fault or Squid's fault, and I'm not sure what to make of it.