Re: [squid-users] Re: squid 3.2.0.14 with TPROXY => commBind: Cannot bind socket FD 773 to xxx.xxx.xxx.xx: (98) Address already in use

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sun, 25 Aug 2013 00:55:02 +1200

On 24/08/2013 9:45 p.m., x-man wrote:
> Hi Amos,
>
> I have exactly the same issue as the above described.
>
> Running squid 3.3.8 in TPROXY mode.
>
> In my setup Squid is serving around 10000 online subscribers, and this
> problem happens when I send it the whole HTTP traffic. If I'm redirecting
> only half of the users, then it works fine.
>
> I guess it's something related to limits imposed by the OS or by Squid
> itself. Please help to identify the exact bottleneck of this issue,
> because this is a scalability issue.
>
> squidclient mgr:info |grep HTTP
> HTTP/1.1 200 OK
> Number of HTTP requests received: 1454792
> Average HTTP requests per minute since start: 116719.5

Nice. With stats like these, would you mind supplying the data necessary
for an entry on this page?
  http://wiki.squid-cache.org/KnowledgeBase/Benchmarks
(see section 2 for how to calculate the datum).
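
For what it's worth, a rough conversion of the figure you quoted into a
requests-per-second number (assuming that is still the datum section 2
asks for) would be something like:

  # rough estimate from "Average HTTP requests per minute since start"
  echo "scale=1; 116719.5 / 60" | bc
  # => roughly 1945.3 requests/sec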

> squidclient mgr:info |grep file
> Maximum number of file descriptors: 524288
> Largest file desc currently in use: 132904
> Number of file desc currently in use: 80893
> Available number of file descriptors: 443395
> Reserved number of file descriptors: 800
> Store Disk files open: 0
>
> ulimit -a from the OS
>
> core file size (blocks, -c) unlimited
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 386229
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 1000000
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 386229
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
>
> Some tunings applied also, but not helping much:
>
> echo "applying specific tunings"
> echo 500 65535 > /proc/sys/net/ipv4/ip_local_port_range
> echo 65000 > /proc/sys/net/ipv4/tcp_max_syn_backlog
> echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time
> echo 50000 > /proc/sys/net/core/netdev_max_backlog
>
> echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
> echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes
> echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse # it's ok
> echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle # it's ok
> echo 2000000 > /proc/sys/net/ipv4/tcp_max_tw_buckets # default 262144 on Ubuntu
>
>
> Let me know what other info might be useful for you?
>

Unfortunately, all I can do is point you at the known reasons for the
message.
The things to figure out are whether there is some limit in the TPROXY
kernel code itself (the socket match module is the critical point, I
think) on how many sockets it can manage, or whether an excessive amount
of the traffic is coming from a few particular IPs, which reduces the
number of outgoing connections available for them.
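
A quick way to test the second theory (just a sketch; adjust to your
setup) is to count established connections grouped by the spoofed local
address. Any single client address holding tens of thousands of
concurrent connections will be running low on the 500-65535 local port
range you configured above, which is consistent with these bind
failures:

  # established TCP connections per local (spoofed client) address
  ss -tn state established | awk 'NR>1 {split($3,a,":"); print a[1]}' \
    | sort | uniq -c | sort -rn | head

  # and, if conntrack is involved at all, how close it is to its ceiling
  sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max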

Amos
Received on Sat Aug 24 2013 - 12:55:22 MDT
