Re: [squid-users] Ongoing Running out of filedescriptors

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Wed, 10 Feb 2010 14:34:13 +1300

On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
<luis.daniel.lucio_at_gmail.com> wrote:
> On Tuesday 9 February 2010 17:29:23, Landy Landy wrote:
>> I don't know what to do with my current Squid. I even upgraded to
>> 3.0.STABLE21 but the problem persists every three days:
>>
>> /usr/local/squid/sbin/squid -v
>> Squid Cache: Version 3.0.STABLE21
>> configure options: '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
>> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
>> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
>> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
>> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
>> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
>> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
>>
>> I built with the --with-maxfd=32768 option but, when Squid is started, it
>> says it is working with only 1024 filedescriptors.
>>
>> I even added the following to the squid.conf:
>>
>> max_open_disk_fds 0
>>
>> But it hasn't resolved anything. I'm using Squid on Debian Lenny. I don't
>> know what to do. Here's part of cache.log:
>>
<snip logs>
>
>
> You've got a bug! That behavior happens when a coredump occurs in Squid;
> please file a ticket with gdb output, and raise the debug level to maximum
> if you can.

WTF are you talking about, Luis? None of the above problems has anything
to do with Squid crashing.

They are, in order:

"WARNING! Your cache is running out of filedescriptors"
 * either the system limits are set too low during run-time operation,
 * or the system limits were too small during the configure and build
process.
   -> Squid may drop new client connections, keeping traffic at
lower-than-desired levels.

  NP: patching the kernel headers to artificially trick Squid into
believing the kernel supports more filedescriptors by default than it
really does is not a good solution. The ulimit utility exists for exactly
that purpose; see the sketch after the snipped patch below.
<snip kernel patch>
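
For illustration, here is a minimal C sketch of what "ulimit -n" (and the
kernel-patch hack above) manipulate: the per-process RLIMIT_NOFILE limit,
read and raised with the standard POSIX resource calls. This is not Squid's
actual startup code, just the system calls involved:

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      /* Read the current per-process open-file limits. */
      if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
          perror("getrlimit");
          return 1;
      }
      printf("soft FD limit: %llu  hard FD limit: %llu\n",
             (unsigned long long)rl.rlim_cur,
             (unsigned long long)rl.rlim_max);

      /* Raise the soft limit up to the hard limit. Raising the hard
         limit itself requires root privileges. */
      rl.rlim_cur = rl.rlim_max;
      if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
          perror("setrlimit");
          return 1;
      }
      return 0;
  }

If the soft limit printed here is 1024, that matches the "only 1024
filedescriptors" message regardless of what --with-maxfd was set to at
build time.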

"Unsupported method attempted by 172.16.100.83"
 * The machine at 172.16.100.83 is pushing non-HTTP data into Squid.
  -> Squid will drop these connections.
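
For illustration only (this is not Squid's actual parser), the check that
produces this message is roughly of the following shape: if the first token
of the request line is not a recognised HTTP method, the bytes are treated
as non-HTTP and the connection is dropped:

  #include <stdio.h>
  #include <string.h>

  static const char *known_methods[] = {
      "GET", "POST", "PUT", "HEAD", "DELETE",
      "OPTIONS", "TRACE", "CONNECT", NULL
  };

  /* Return 1 if the request line begins with a recognised method. */
  int method_supported(const char *request_line)
  {
      for (const char **m = known_methods; *m; ++m) {
          size_t n = strlen(*m);
          if (strncmp(request_line, *m, n) == 0 && request_line[n] == ' ')
              return 1;
      }
      return 0;
  }

  int main(void)
  {
      /* A real HTTP request line is recognised... */
      printf("%d\n", method_supported("GET / HTTP/1.1"));  /* prints 1 */
      /* ...raw binary bytes (e.g. a TLS handshake) are not. */
      printf("%d\n", method_supported("\x16\x03\x01"));    /* prints 0 */
      return 0;
  }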

"clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (2) No such file
or directory"
 * NAT interception is failing to locate the NAT table entry for some
client connection.
 * usually caused by sending regular traffic to the same port that is
configured with the "transparent" option.
 -> for now Squid will treat these connections as if the directly
connecting box were the real client. This WILL change in a near-future
release.
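
For reference, the lookup that fails here is the standard Linux netfilter
getsockopt(SO_ORIGINAL_DST) call. A minimal sketch (illustrative only, not
Squid's actual clientNatLookup code):

  #include <stdio.h>
  #include <string.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <linux/netfilter_ipv4.h>   /* defines SO_ORIGINAL_DST */

  /* Fill 'dst' with the pre-NAT destination of an intercepted client
     connection; return -1 (errno set) when no NAT entry exists. */
  int nat_original_dst(int client_fd, struct sockaddr_in *dst)
  {
      socklen_t len = sizeof(*dst);

      memset(dst, 0, sizeof(*dst));
      if (getsockopt(client_fd, SOL_IP, SO_ORIGINAL_DST, dst, &len) != 0) {
          /* errno ENOENT -> "(2) No such file or directory": the kernel
             has no NAT table entry for this socket, e.g. because regular
             (non-intercepted) traffic arrived on a "transparent" port. */
          perror("NF getsockopt(SO_ORIGINAL_DST) failed");
          return -1;
      }
      return 0;
  }

The "(2) No such file or directory" in the log line is simply the ENOENT
errno from this call.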

As you can see, in none of those handling operations does Squid crash or
core dump.

Amos