RE: [squid-users] RE: Too Many Open File Descriptors

From: Justin Lawler <jlawler_at_amdocs.com>
Date: Wed, 10 Aug 2011 12:12:14 +0300

Thanks again for this.

Yes - after checking internally, squid was recompiled with the FD limit set to 2048.

So to confirm - you think we should update this value to 65535 & recompile? Could we get away with a lower value - say 4096?

If we did this, would we need to use the ulimit in a startup script?
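
Something like the below is what we have in mind - a rough sketch only, assuming a from-source rebuild (as far as I know --with-maxfd is the configure option for 3.0; 3.1 renamed it --with-filedescriptors), and the paths are only illustrative:

./configure --with-maxfd=65536 ...existing options...
make && make install

# startup script: raise the process limit before launching squid
ulimit -HSn 65536
/usr/local/squid/sbin/squid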

Thanks and regards,
Justin



-----Original Message-----
From: Amos Jeffries [mailto:squid3_at_treenet.co.nz]
Sent: Wednesday, August 10, 2011 2:34 PM
To: squid-users_at_squid-cache.org
Subject: RE: [squid-users] RE: Too Many Open File Descriptors

 On Wed, 10 Aug 2011 08:59:08 +0300, Justin Lawler wrote:
> Hi,
>
> Thanks for this. Is this a known issue? Are there any bugs/articles on
> this? We would need something more concrete to take to the customer -
> more background on this issue would be very helpful.


 http://onlamp.com/pub/a/onlamp/2004/02/12/squid.html

 Each ICAP service the client request passes through counts as an
 FD-consuming external helper.
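
 As a rough back-of-the-envelope check, using the worst-case figure of
 7 FDs per request quoted below and your 100 reserved descriptors:

   (2048 - 100) / 7 =~ 278

 so roughly 278 concurrent requests can be in flight before the
 descriptor pool runs out.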

>
> Are 2048 FDs enough? Are there any connection leaks? Does squid ignore
> this 2048 value?

 The fact that you are here asking that question is proof that no, it's
 not (for you).

>
> The OS has FD limits as below - so we would have thought the current
> configuration should be OK?
> set rlim_fd_max=65536
> set rlim_fd_cur=8192

 Only if squid is not configured with a lower number - which appears to
 be the case.
 As proof, the manager report from inside squid:
  "Maximum number of file descriptors: 2048"

 Squid could have been built with an absolute 2048 limit hard-coded by
 the configure options.
 Squid could have been started by an init script which lowered the
 available descriptors from the OS default to 2048.

 You say it's 3.0, which does not support configurable FD limits in
 squid.conf. So that alternative is out.
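
 Two quick ways to check which of those it is (assuming a typical
 install; adjust paths to suit):

   /usr/local/squid/sbin/squid -v    # prints the ./configure options used
   plimit `pgrep squid`              # Solaris: per-process FD soft/hard limits

 If "squid -v" shows --with-maxfd=2048 you need a rebuild; if only the
 process limit is 2048, raising it with ulimit in the init script is
 enough.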

 Amos

>
>
> Thanks,
> Justin
>
>
> -----Original Message-----
> From: Amos Jeffries [mailto:squid3_at_treenet.co.nz]
> Sent: Wednesday, August 10, 2011 11:47 AM
> To: squid-users_at_squid-cache.org
> Subject: Re: [squid-users] RE: Too Many Open File Descriptors
>
> On Tue, 09 Aug 2011 23:07:05 -0400, Wilson Hernandez wrote:
>> That used to happen to us and had to write a script to start squid
>> like this:
>>
>> #!/bin/sh -e
>> #
>>
>> echo "Starting squid..."
>>
>> ulimit -HSn 65536
>> sleep 1
>> /usr/local/squid/sbin/squid
>>
>> echo "Done......"
>>
>>
>
> Pretty much the only solution.
>
> ICAP raises the potential worst-case socket consumption per client
> request from 3 FD to 7. REQMOD also doubles the minimum resource
> consumption from 1 FD to 2.
>
> Amos
>
>>
>> On 8/9/2011 10:47 PM, Justin Lawler wrote:
>>> Hi,
>>>
>>> We have two instances of squid (3.0.15) running on a solaris box.
>>> Every so often (maybe once every month) we get a load of the below
>>> errors:
>>>
>>> "2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open
>>> files"
>>>
>>> Sometimes it goes away on its own, sometimes squid crashes and
>>> restarts.
>>>
>>> When it happens, it generally happens on both instances of squid on
>>> the same box.
>>>
>>> We have the number of open file descriptors set to 2048 - as shown
>>> by squidclient mgr:info:
>>>
>>> root_at_squid01# squidclient mgr:info | grep file
>>> Maximum number of file descriptors: 2048
>>> Largest file desc currently in use: 2041
>>> Number of file desc currently in use: 1903
>>> Available number of file descriptors: 138
>>> Reserved number of file descriptors: 100
>>> Store Disk files open: 68
>>>
>>> We're using squid as an ICAP client. The two squid instances point to
>>> two different ICAP servers, so it's unlikely to be a problem with the
>>> ICAP server.
>>>
>>> Is this a known issue? As it's going on for a long time (over 40
>>> minutes continuously), it doesn't seem to be just a temporary
>>> traffic spike. Also, we're not seeing it on the other boxes,
>>> which are load balanced.
>>>
>>> Any pointers much appreciated.
>>>
>>> Regards,
>>> Justin
>
>


Received on Wed Aug 10 2011 - 09:12:39 MDT
