Re: [squid-users] Squid Non-Responsive With generate-host-certificates.

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sat, 19 Apr 2014 03:14:15 +1200

On 18/04/2014 2:13 p.m., Ethan H wrote:
> Hi,
>
> I already posted this but no one responded - I’m guessing that I
> posted too much of my config file and too much of my log. Now, I just
> included what is important to fix the problem and if you want complete
> files posted I can.
>
> I recently configured Squid to ssl-bump connections and dynamically
> generate certificates. I am running Squid 3.3.3 on Ubuntu 13.10.
>
> —————————————————————————————
>
> ssl_bump server-first
>
> #Devices configured to use the proxy. No interception for HTTPS
> http_port 3128
> https_port 3128 cert=/usr/ssl/myCA.pem
>

You have used port 3128 twice, for two very different protocols.
That is a recipe for disaster, and it appears to be exactly what is
happening here.

Note that the operating system's protection against a socket being
opened twice does *not* apply to SMP-enabled Squid (any recent version)
when the socket has already been opened by another Squid process.
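
The usual fix is to give each protocol its own port. A minimal sketch,
assuming port 3130 (or any other unused port) for the explicit TLS
listener:

  # explicit forward-proxy HTTP
  http_port 3128

  # explicit TLS connections to the proxy, on its own port
  https_port 3130 cert=/usr/ssl/myCA.pem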

> #Devices configured to use the proxy. Interception for HTTPS
> http_port 3129 ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB cert=/usr/ssl/myCA.pem
>
> #Devices unconfigured to use the proxy. Sent by the router.
> http_port 3127 intercept ssl-bump cert=/usr/ssl/myCA.pem
> https_port 3126 intercept ssl-bump cert=/usr/ssl/myCA.pem
>
> —————————————————————————————
>
> Squid worked flawlessly until I added the http_port 3129 line with
> ssl-bump and generate-host-certificates. Since then, Squid crashes
> anywhere from 1 to 12 hours after starting. Here is part of my
> cache.log file:
>
> —————————————————————————————
>
>
> 2014/04/12 21:40:37 kid1| Accepting HTTP Socket connections at
> local=[::]:3128 remote=[::] FD 11 flags=9
>
> 2014/04/12 21:40:37 kid1| Accepting SSL bumped HTTP Socket connections
> at local=[::]:3129 remote=[::] FD 12 flags=9
>
> 2014/04/12 21:40:37 kid1| Accepting NAT intercepted SSL bumped HTTP
> Socket connections at local=0.0.0.0:3127 remote=[::] FD 13 flags=41
>
> 2014/04/12 21:40:37 kid1| Accepting HTTPS Socket connections at
> local=[::]:3128 remote=[::] FD 14 flags=9
>
> 2014/04/12 21:40:37 kid1| Accepting NAT intercepted SSL bumped HTTPS
> Socket connections at local=0.0.0.0:3126 remote=[::] FD 15 flags=41
>
> 2014/04/12 21:40:37 kid1| ERROR: listen( FD 14, [::] [ job9832],
> 1024): (98) Address already in use
>
> 2014/04/12 21:50:56 kid1| clientNegotiateSSL: Error negotiating SSL
> connection on FD 30: error:1407609C:SSL
> routines:SSL23_GET_CLIENT_HELLO:http request (1/-1)

A client sent an HTTP request to Squid port 3128. The HTTP*S* listening
port code received it.

Every one of these HTTPS failures bumps a TCP socket (FD) into
TIME_WAIT status unnecessarily.

>
> ***MESSAGE ABOVE REPEATED MULTIPLE TIMES
>
> 2014/04/12 21:56:29 kid1| WARNING: HTTP: Invalid Response: No object
> data received for https://www.facebook.com/connect/ping
>

That one looks like your traffic interception or routing system is
perhaps looping, or otherwise screwing up the outbound connections.

All these failures mean TCP sockets (FDs) get stuck for fairly long
periods waiting for remote responses which never come. Each one
consumes two FDs for the entire time, then bumps one of them into
additional TCP TIME_WAIT delays.
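
You cannot configure the loop away from inside Squid, but while you
track it down you can at least cap how long those dead connections hold
FDs by tightening the relevant squid.conf timeouts. The values below
are only illustrations, not recommendations:

  connect_timeout 30 seconds
  read_timeout 5 minutes
  request_timeout 1 minute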

> ***MESSAGE ABOVE REPEATED MULTIPLE TIMES
>
> 2014/04/13 22:08:08 kid1| WARNING! Your cache is running out of filedescriptors
>
> ***MESSAGE ABOVE REPEATED MULTIPLE TIMES
>
> 2014/04/13 22:13:08 kid1| NF getsockopt(SO_ORIGINAL_DST) failed on
> local=192.168.0.10:3126 remote=192.168.0.49:39402 FD 62 flags=33: (2)
> No such file or directory

One of your intercept listening ports is receiving traffic which was
not intercepted by the local machine's NAT system.

Each of these failures also bumps a TCP socket (FD) into TIME_WAIT
status unnecessarily. It can also mean Squid is attempting connections
to itself to fetch the data, which consumes an unknown number of extra
FDs very quickly. If Squid detects and aborts such a loop, those
sockets get bumped into TIME_WAIT to recover. Either way, a bunch more
FDs are wasted.
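
For the intercept ports to work, the NAT has to happen on the Squid box
itself so the SO_ORIGINAL_DST lookup has something to find; packets
merely forwarded by the router to port 3126/3127 do not count. Roughly,
on the Squid box (eth0 here is only a placeholder for your LAN-facing
interface):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3127
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3126

Make sure Squid's own outgoing traffic is excluded from those rules, or
you create exactly the forwarding loop described above.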

>
> ***MESSAGE ABOVE REPEATED MULTIPLE TIMES
>
> 2014/04/13 22:22:14 kid1| WARNING! Your cache is running out of filedescriptors

... and surprise, surprise: within 30-35 minutes your box has committed
a DoS on itself.

>
> ————————————————————————————————————
>
> I’m thinking it is crashing from the lack of file descriptors. I
> changed my configuration to give it 4096 file descriptors, and
> cache.log confirms this at startup. I would really appreciate any
> ideas anyone might have to fix this problem.

Either way, the file descriptors are being consumed much faster than
they should be and the box DoS's itself. Squid *should* be able to cope
with all of the above; it just becomes .... really ..... really
...... slooowwww.
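
Raising the limit buys time but does not fix the leak. If you need more
headroom while debugging, and your kernel/ulimit ceiling allows it,
something like this in squid.conf is an option:

  max_filedescriptors 8192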

Amos