Re: [squid-users] Recurrent crashes and warnings: "Your cache is running out of filedescriptors"

From: Fred B <fredbmail_at_free.fr>
Date: Tue, 11 Oct 2011 15:38:44 +0200 (CEST)

----- "Leonardo" <leonardodiserpierodavinci_at_gmail.com> wrote:

> Hi all,
>
> I'm running a transparent Squid proxy on Linux Debian 5.0.5,
> configured as a bridge. The proxy serves a few thousand users daily.
> It uses Squirm for URL rewriting and (for the last six weeks) sarg
> for generating reports. I compiled it from source.
> This is the output of squid -v:
> Squid Cache: Version 3.1.7
> configure options: '--enable-linux-netfilter' '--enable-wccp'
> '--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
> '--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
> 'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
> --enable-ltdl-convenience
> I set squid.conf to allocate 10 GB of disk cache:
> cache_dir ufs /var/cache 10000 16 256
>
>
> Everything worked fine for almost one year, but now suddenly I keep
> having problems.
>
>
> Recently Squid crashed and I had to delete swap.state.
>
>
> Now I keep seeing this warning message in cache.log and on the console:
> client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
> filedescriptors
>
> At OS level, /proc/sys/fs/file-max reports 314446.
> squidclient mgr:info reports 1024 as the max number of file descriptors.
> I've tried setting both SQUID_MAXFD=4096 in /etc/default/squid and
> max_filedescriptors 4096 in squid.conf, but neither was successful.
> Do I really have to recompile Squid to increase the max number of FDs?
>
>
> Today Squid crashed again, and when I tried to relaunch it, it gave
> this output:
>
> 2011/10/11 11:18:29| Process ID 28264
> 2011/10/11 11:18:29| With 1024 file descriptors available
> 2011/10/11 11:18:29| Initializing IP Cache...
> 2011/10/11 11:18:29| DNS Socket created at [::], FD 5
> 2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
> (...)
> 2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm'
> processes
> 2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
> 2011/10/11 11:18:39| Store logging disabled
> 2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated
> 807857 objects
> 2011/10/11 11:18:39| Target number of buckets: 40392
> 2011/10/11 11:18:39| Using 65536 Store buckets
> 2011/10/11 11:18:39| Max Mem size: 262144 KB
> 2011/10/11 11:18:39| Max Swap size: 10240000 KB
> 2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on
> device
> FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
>
> I therefore deactivated the cache and reran Squid. It showed a long
> list of errors of this type:
> IpIntercept.cc(137) NetfilterInterception: NF
> getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
> directory
> and then started. Now Squid is running and serving requests, albeit
> without caching. However, I keep seeing the same warning:
> client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
> filedescriptors
>
> What is the reason for this, since I'm not using caching at all?
>
>
> Thanks a lot if you can shed some light on this.
> Best regards,
>
>
> Leonardo

Hi,

Regarding "2011/10/11 11:18:29| With 1024 file descriptors available":

Before compiling, raise the limit in the build shell:

ulimit -n 65536

and then rebuild with '--with-filedescriptors=65536'.

If that is still not enough, add "ulimit -n 48000" to /etc/init.d/squid.
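A minimal sketch of what that init-script change could look like (the path /etc/init.d/squid and the value 48000 come from the suggestion above; clamping against the kernel hard limit is an added safeguard, not something Squid requires):

```shell
#!/bin/sh
# Sketch: lines to add near the top of /etc/init.d/squid so the daemon
# starts with a higher file-descriptor limit. The value should not
# exceed what Squid was compiled with (--with-filedescriptors).
want=48000
hard=$(ulimit -Hn)
# Never ask for more than the kernel hard limit allows.
if [ "$hard" != "unlimited" ] && [ "$want" -gt "$hard" ]; then
    want=$hard
fi
ulimit -Sn "$want"
echo "Squid will start with $(ulimit -Sn) file descriptors"
```

After restarting Squid, "squidclient mgr:info" should report the new maximum.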

Regarding "2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device":

Could you post the result of "df -h /var"?
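Worth noting: error (28) ENOSPC can also mean the filesystem has run out of inodes rather than blocks, which is common with a ufs cache_dir full of small files. A quick check for both (a sketch, using /var as in the question above):

```shell
# Check free blocks and free inodes on the filesystem holding the cache.
# Either one being exhausted produces "(28) No space left on device".
df -h /var   # block usage
df -i /var   # inode usage: a ufs cache_dir creates very many small files
```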
Received on Tue Oct 11 2011 - 13:39:01 MDT

This archive was generated by hypermail 2.2.0 : Tue Oct 11 2011 - 12:00:04 MDT