Re: [squid-users] your cache is Running out of filedescriptors

From: Henrik Nordstrom <hno@dont-contact.us>
Date: 24 Mar 2003 20:36:44 +0100

Note: Unless you are processing about 100 requests/s or more, there should
be no need to increase the file descriptor limit beyond the default 1024.
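To see where you stand relative to that default, you can check the per-process limit from the shell (values shown are whatever your system reports, not recommendations):

```shell
# Current soft file descriptor limit for this shell and its children
ulimit -n
# Hard limit the soft limit may be raised to (may print "unlimited")
ulimit -Hn
```

Squid's cache.log also prints the limit it was started with ("Maximum number of file descriptors") at startup.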

If you are processing significantly fewer requests/s and still run out
of file descriptors, then the actual bottleneck is elsewhere, usually
disk I/O that Squid cannot keep up with.

Last time I looked, OSX could only support the "ufs" cache_dir type,
which severely limits the maximum request rate unless you can afford to
effectively run the whole cache on a RAM disk.

For proxy request rates beyond 30-50 requests/s you need to investigate
the asynchronous cache_dir types (aufs or diskd).
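As a sketch of what such a change looks like in squid.conf (the path, cache size, and subdirectory counts below are illustrative, and aufs is only available if Squid was built with aufs store I/O support):

```
# cache_dir <type> <directory> <size-MB> <L1-dirs> <L2-dirs>
# aufs performs disk I/O in worker threads instead of blocking the main loop
cache_dir aufs /var/spool/squid 16384 16 256
```

diskd takes the same basic parameters but uses separate helper processes rather than threads.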

Regards
Henrik

On Mon 2003-03-24 at 16.08, Jeff Donovan wrote:
> greetings
> Can someone give me an explanation of this error? I understand
> (in a limited way) that it has something to do with the OS reaching some limit.
>
> Currently I am running
> Squid 2.5 Stable1
> cache mem = 256mb
> cache size = 16gb
>
> SquidGuard 1.2.0
> BerkeleyDB 2.7.7
> OSX 10.2.4 server
> dual 1 GHz PowerPC G4
> Memory 2 GB
>
> Just before the file descriptor error I receive these notices:
>
> parseHttpRequest: requestheader contains NULL characters
> clientReadRequest: FD {somenumber} Invalid request
> WARNING! Your cache is running out of filedescriptors
>
> Any insight?
>
> --jeff
>

-- 
Henrik Nordstrom <hno@squid-cache.org>
MARA Systems AB, Sweden
Received on Mon Mar 24 2003 - 12:36:56 MST
