[squid-users] Re: WARNING! Your cache is running out of filedescriptors

From: Joe Cooper <joe@dont-contact.us>
Date: Mon, 01 Apr 2002 02:41:24 -0600

First, a tip on posting to a mailing list: if the contents of a log, or
similar, are longer than 30 lines or so, it is most polite to clip just
the most relevant part--or, if you can't tell which parts are relevant,
post it to a website and provide a link. In this case, you had already
recognized the most important message, since it was in your subject--so
a few lines of that would have been sufficient to ask this question, I
think.

Now on to suggestions...

Use the latest STABLE Squid version. There is a reason the guys keep
putting out new versions...they are better than old versions. It also
makes it easier for everyone to help everyone if we're all using the
same Squid version.

It sounds like either you have too much traffic for your Squid, or
something in your environment is tying up file descriptors, leading to
an overload condition. When Squid runs out of file descriptors it must
stop accepting requests (it doesn't have any 'slots' left to put the
requests into in the select or poll loop).
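
For what it's worth, here is a quick way to see where you stand (a
sketch; the paths assume Linux 2.2, and "client" is the cachemgr tool
that ships with Squid 2.3--later releases rename it squidclient):

```shell
# A sketch of where to look when descriptors run out.

# Per-process limit Squid inherits from its environment:
ulimit -n

# System-wide file handle counters on Linux 2.2:
if [ -r /proc/sys/fs/file-nr ]; then
    cat /proc/sys/fs/file-nr
fi

# Squid's own count, via the cache manager interface.
# ("|| true" so the script carries on if Squid isn't running.)
if command -v client >/dev/null 2>&1; then
    client mgr:info | grep -i 'file desc' || true
fi
```

If the cachemgr number is creeping toward the limit over those 3-4
hours, that matches the slow-strangulation behavior you describe.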

Now, to answer your primary question: I'm going to assume the box is
big enough for your network load. The usual reason for file descriptors
to run out is that some aspect of the network is broken, so requests
just hang. Maybe it is a DNS problem. How fast (or slow) are DNS
queries coming back? Is there any abnormal latency when requesting
objects from the Squid box (i.e. using Lynx or wget on the Squid
machine itself)?
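
A crude way to put numbers on both questions (a sketch;
www.example.com is only a placeholder--probe a name your clients
actually request):

```shell
# Crude latency probe, in whole seconds.
probe() {
    start=$(date +%s)
    "$@" >/dev/null 2>&1 || true   # ignore the command's own status
    end=$(date +%s)
    echo "$(( end - start ))s: $*"
}

# DNS round trip (substitute a name your clients actually request):
if command -v host >/dev/null 2>&1; then
    probe host www.example.com
fi

# Full object fetch from the Squid machine itself:
if command -v wget >/dev/null 2>&1; then
    probe wget -q -O /dev/null http://www.example.com/
fi
```

Run it when things are healthy and again when the complaints start;
the comparison matters more than the absolute numbers.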

The other stuff:

parseHttpRequest: Unsupported method 'BPROPPATCH'

This error just indicates an unsupported method--exactly what it says.
It can safely be ignored for the time being (and when you upgrade to the
latest Squid, I think that will disappear...or maybe the fix comes in
2.5...if so, do a search on the mailing list archives for
"extension_methods" and learn lots of interesting stuff).
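
If you do move to 2.5, the directive looks something like this in
squid.conf (a sketch from memory--check the 2.5 default config for the
exact syntax):

```
# squid.conf -- Squid 2.5 and later only; 2.3 has no such directive.
# Tell Squid to accept the WebDAV-style method it was rejecting:
extension_methods BPROPPATCH
```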

idnsCheckQueue: ID 1e3a: giving up after 20 tries and 11.4 seconds

Your DNS server didn't answer a lookup request after 20 tries and 11.4
seconds. This has nothing to do with Squid...it may indicate a problem
with your DNS server.

sslReadServer: FD 122: read failure: (104) Connection reset by peer

The client probably cancelled the request. No big deal. Theoretically
could indicate a bug in Squid, but very unlikely. Besides, nobody would
care if it were a bug--you're using an old Squid version. No one is
fixing bugs in 2.3 anymore.

Good luck!

squid_nehal@speednetindia.com wrote:
> Respected sir,
>
> I am using Linux 2.2.16-22smp (Linux 7.0) and Squid 2.3STABLE4. I
> have around 200 clients.
>
> Now the problem is: when I start my server there is no problem for 3
> hours, but after 3 or 4 hours my leased line provider tells me my
> utilization is going down, and all my clients complain that the speed
> is very slow. At that point I have to restart my Squid service, and
> then everything runs normally again.
>
> I checked my cache.log and found the following...

-- 
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support
Received on Mon Apr 01 2002 - 01:43:46 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 17:07:17 MST