Re: [squid-users] Multiple requests from one user has stopped service

From: Joe Cooper <joe@dont-contact.us>
Date: Wed, 28 Apr 2004 14:23:33 -0500

Yes. This is exactly the kind of thing that is helped by disabling
half_closed_clients. As I mentioned, I found it removed the need to
increase file descriptors in every case for my clients. It is still a
problem, however, in that the virus still causes an increased load on
the proxy. So if you're already pushing your box close to its limit,
this might push it over.
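In squid.conf that is a one-line change (half_closed_clients defaults to
on):

```
# squid.conf fragment: close a connection as soon as the client
# half-closes it, instead of holding the descriptor open until a
# read error finally occurs. Frees FDs pinned by abandoned sockets.
half_closed_clients off
```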

Increasing file descriptors is pretty well documented in the Squid FAQ.
I haven't rolled a stock Red Hat or Fedora RPM lately, so I won't point
you to my Squid RPMs as a file descriptor solution, but they are always
compiled with 8192 FDs. The initscript does not enable that limit yet,
though I reckon I'll change that for the next build. Maybe this weekend.
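Enabling the higher limit from the initscript might look something like
this (a sketch only; it assumes a Linux sh initscript run as root and a
squid binary built with an 8192-FD ceiling, as described above):

```shell
#!/bin/sh
# Sketch: raise the per-process file-descriptor limit before starting
# squid, so the 8192-FD compile-time ceiling is actually usable at
# runtime. Raising the hard limit generally requires root; the fallback
# just warns instead of aborting startup.
ulimit -HSn 8192 2>/dev/null || echo "warning: could not raise FD limit" >&2
echo "squid will inherit $(ulimit -Sn) file descriptors"
```

The limit must be raised in the process that execs squid, since ulimit
settings are inherited by child processes rather than set system-wide.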

Jason McNeil wrote:
>
> Thanks Joe, I may give that a shot, as well as increasing the file
> descriptors, though I don't rightly know how, as our squid was installed
> from a binary RPM package.
>
> This is cut from my cache.log from when I tried to restart squid. Does
> it look like the kind of thing that may be helped by turning off
> half_closed_clients?
>
> 2004/04/27 15:03:17| WARNING! Your cache is running out of filedescriptors
> 2004/04/27 15:03:33| WARNING! Your cache is running out of filedescriptors
> 2004/04/27 15:03:49| WARNING! Your cache is running out of filedescriptors
> 2004/04/27 15:04:05| WARNING! Your cache is running out of filedescriptors
> 2004/04/27 15:04:21| WARNING! Your cache is running out of filedescriptors
> 2004/04/27 15:04:37| WARNING! Your cache is running out of filedescriptors
> 2004/04/27 15:04:44| Preparing for shutdown after 1676825 requests
> 2004/04/27 15:04:44| Waiting 30 seconds for active connections to finish
> 2004/04/27 15:04:44| FD 12 Closing HTTP connection
> 2004/04/27 15:05:15| Shutting down...
> 2004/04/27 15:05:15| FD 13 Closing ICP connection
> 2004/04/27 15:05:15| FD 14 Closing SNMP socket
> 2004/04/27 15:05:15| WARNING: Closing client 192.75.95.xxx connection
> due to lifetime timeout
> 2004/04/27 15:05:15| http://25.110.18.42/
> 2004/04/27 15:05:15| WARNING: Closing client 192.75.95.xxx connection
> due to lifetime timeout
> 2004/04/27 15:05:15| http://2.229.221.179/
> 2004/04/27 15:05:15| WARNING: Closing client 192.75.95.xxx connection
> due to lifetime timeout
> 2004/04/27 15:05:15| http://12.74.158.195/
> 2004/04/27 15:05:15| WARNING: Closing client 192.75.95.xxx connection
> due to lifetime timeout
> 2004/04/27 15:05:15| http://203.106.220.212/
> 2004/04/27 15:05:15| WARNING: Closing client 192.75.95.xxx connection
> due to lifetime timeout
> 2004/04/27 15:05:15| http://59.1.102.110/
>
>
> and it goes on and on for about 5 pages of this guy's IP disconnecting.
> It certainly looks like what you suggested.
>
>
> Joe Cooper wrote:
>
>> Jason McNeil wrote:
>>
>>> Hello there, yesterday one of our users going through the squid cache
>>> machine (which is also our network's gateway) began attempting to
>>> connect to what seemed to be random IP addresses, far faster than
>>> humanly possible. We suspect he had some sort of trojan or virus. The
>>> problem is that as he was attempting to connect to all these
>>> non-existent IP addresses, the squid service hung for everyone else
>>> trying to use it (about another 50 users). Normally we have no traffic
>>> or workload problems on our server, yet this completely disabled any
>>> traffic going through port 80. The cache.log began to print "WARNING!
>>> You are running out of file descriptors" repeatedly until we
>>> disconnected the user and restarted squid. I guess what I'm asking is:
>>>
>>> a) Has anything like this happened to anyone else?
>>> b) Would increasing the number of file descriptors help to avoid this
>>> problem in the future?
>>> c) Is there any way to limit the number of requests from a certain ip
>>> address in a certain amount of time?
>>
>>
>>
>> I've found that changing "half_closed_clients" to off significantly
>> reduces the impact of this virus (it is yet another Windows virus
>> making the rounds), though it still has some impact. It hasn't been
>> necessary, so far, to raise file descriptors on even our most heavily
>> loaded boxes, but our default installation is compiled with 8192 file
>> descriptors instead of the usual 1024.
>>
>> I would like to find an appropriate firewall rule to block this at the
>> network layer, since it still has a performance impact, but the
>> seemingly random destinations make that difficult. (There is a variant
>> that only targets www.microsoft.com, which can also be problematic
>> when the MS DoS prevention kicks in and blocks the proxy from access
>> for a while.)
>
>
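The per-source throttle asked about in (c), and the firewall rule wished
for above, could be approximated with the iptables "recent" match. This
is a sketch under assumptions: a Linux netfilter kernel with the
"recent" match available, and squid listening on its default port 3128
(adjust to your http_port). The script only prints the rules for review
rather than applying them; run the output as root to install.

```shell
#!/bin/sh
# Sketch: cap the rate of NEW connections per source address to the
# proxy port. The first rule drops a SYN if this source has opened
# HITS or more connections within WINDOW seconds; the second records
# the source and accepts. Values are illustrative, not tuned.
PORT=3128      # squid's default http_port (assumption)
HITS=20        # max new connections per source...
WINDOW=1       # ...per this many seconds
cat <<EOF
iptables -A INPUT -p tcp --dport $PORT --syn -m recent --name proxyclients --update --seconds $WINDOW --hitcount $HITS -j DROP
iptables -A INPUT -p tcp --dport $PORT --syn -m recent --name proxyclients --set -j ACCEPT
EOF
```

Note the rule order matters: the --update/DROP check must come before
the --set/ACCEPT rule, or no source would ever be throttled.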
Received on Wed Apr 28 2004 - 13:24:40 MDT

This archive was generated by hypermail pre-2.1.9 : Fri Apr 30 2004 - 12:00:03 MDT