Re: Run out of file descriptors

From: Dancer <dancer@dont-contact.us>
Date: Tue, 06 Jan 1998 13:39:27 +1000

I discovered that place. I funnel all of squid's fetches through another process,
effectively adding two descriptors to every miss. Between squid, the dnsservers, the
redirectors, and that, we must have brushed the 1024 limit. But then something
strange happened, and we spiralled out of control and lost all our RAM and swap
besides. Squid does that under a couple of limited circumstances (which I avoid), and
may be doing so again. Trying to pin it down.

D

Alvin Starr wrote:

> On Mon, 5 Jan 1998, Henrik Nordstrom wrote:
>
> >
> > Dancer wrote:
> >
> > > The question on my mind is why did the _system_ choke, instead of just squid?
> >
> > Probably because you hit a system limit before the per-process limit. I
> > am not sure that the problem in your case was running out of file
> > descriptors. Squid running out of file descriptors may have triggered
> > it, but not caused the system to choke. Maybe you ran out of memory as
> > well?
>
> We had a similar problem with squid on a busy site where we were using it
> for HTTP acceleration. The maximum number of fds in 2.0.x Linux is 1024. The
> only problem seems to be that there is someplace in the OS that is using
> fds, so before you hit the real limit you end up hitting an internal
> limit. By bumping the number of fds up to 4k the problem went away, since
> we now never get close to the magical limit.
>
> Alvin Starr || voice: (416)493-3325
> Interlink Connectivity || fax: (416)493-7974
> alvin@iplink.net ||
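
For reference, here is a minimal sketch (not from the original thread, and
assuming a system with the standard getrlimit()/setrlimit() interface) of how
a process can inspect and raise its own per-process descriptor limit. The
4096 target simply mirrors the "4k" figure mentioned above; on 2.0.x Linux
the kernel-wide ceiling may also need raising (e.g. by rebuilding with larger
limits), which a setrlimit() call alone cannot do.

    /* Query and (attempt to) raise the per-process fd limit. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %ld, hard limit: %ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);

        /* Raise the soft limit toward 4096, capped at the hard limit. */
        rl.rlim_cur = (rl.rlim_max < 4096) ? rl.rlim_max : 4096;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }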

--
Note to evil sorcerers and mad scientists: don't ever, ever summon powerful
demons or rip holes in the fabric of space and time. It's never a good idea.
ICQ UIN: 3225440
Received on Mon Jan 05 1998 - 19:40:20 MST
