[Fwd: Fwd: Re: Squid DOS on accelerated boxes]

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Wed, 16 May 2001 23:14:16 +0200

Something to chew on if anyone is interested..

--
Henrik

attached mail follows:


In fact, to take this a step further, perhaps a limiting parameter which
capped how many FDs could be allocated to a specific site would work?
The nasty thing about what I found is that all 970 or so FDs were going
to the same site; a limit of, say, 300 to any one site would at least
avert the problem and allow squid to keep working. I would also have
noticed the heap of FDs in use (if not a note in cache.log) and looked
closer if I had seen it coming.

Then again, I am not a coder, and I don't know how easy or hard this
would be to write, let alone what it would cost squid in performance to
check FD usage for every request...
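
A rough illustration of what such a per-site cap might look like -- this
is not Squid code, and the names MAX_FDS_PER_SITE, site_fd_reserve() and
site_fd_release() are made up for the sketch. The idea is a small hash
table keyed on the destination host, incremented when a server
connection is opened and decremented when it closes, so the per-request
cost is a single O(1) lookup:

/*
 * Illustrative sketch only -- not Squid code.  A per-site cap on
 * outgoing file descriptors, tracked in a small hash table keyed on
 * the destination host.  MAX_FDS_PER_SITE, site_fd_reserve() and
 * site_fd_release() are invented names.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HASH_BUCKETS     257
#define MAX_FDS_PER_SITE 300    /* the kind of limit suggested above */

struct site_count {
    char host[256];
    int fds;
    struct site_count *next;
};

static struct site_count *buckets[HASH_BUCKETS];

static unsigned hash_host(const char *host)
{
    unsigned h = 5381;
    while (*host)
        h = h * 33 + (unsigned char)*host++;
    return h % HASH_BUCKETS;
}

static struct site_count *lookup(const char *host, int create)
{
    unsigned i = hash_host(host);
    struct site_count *sc;
    for (sc = buckets[i]; sc; sc = sc->next)
        if (strcmp(sc->host, host) == 0)
            return sc;
    if (!create)
        return NULL;
    sc = calloc(1, sizeof(*sc));
    strncpy(sc->host, host, sizeof(sc->host) - 1);
    sc->next = buckets[i];
    buckets[i] = sc;
    return sc;
}

/* Call before opening a server connection; refuses once the site has
 * its share of descriptors so one site cannot exhaust the whole pool. */
int site_fd_reserve(const char *host)
{
    struct site_count *sc = lookup(host, 1);
    if (sc->fds >= MAX_FDS_PER_SITE)
        return 0;
    sc->fds++;
    return 1;
}

/* Call when the server connection is closed. */
void site_fd_release(const char *host)
{
    struct site_count *sc = lookup(host, 0);
    if (sc && sc->fds > 0)
        sc->fds--;
}

int main(void)
{
    int i, granted = 0;
    /* Simulate one broken site trying to grab 1000 descriptors. */
    for (i = 0; i < 1000; i++)
        if (site_fd_reserve("broken.example.com"))
            granted++;
    printf("granted %d of 1000 connection attempts\n", granted);
    return 0;
}

With the cap at 300, one runaway site stops being granted descriptors
long before the whole pool is exhausted, so the rest of the traffic
keeps flowing.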

Reuben

>To: Henrik Nordstrom <hno@hem.passagen.se>
>From: Reuben Farrelly <reuben-squid-dev@reub.net>
>Subject: Re: Squid DOS on accelerated boxes
>
>The practice of 950+ file descriptors being chewed up in about 3 seconds
>could ring alarm bells though....it's not really something that an
>ordinary cache would do :-)
>
>reuben
>
>
>At 01:05 PM 14/05/2001 +0200, you wrote:
>>Good. No need to worry then, but I will still look into whether things
>>can be improved to not crash when it happens... ideas for good
>>approaches on how to detect a loop with no information are welcome
>>(using the source IP is not really a good alternative).
>
>-------------------------------------------------------------
>Reuben Farrelly West Ryde, NSW 2114, Australia
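
On the loop-detection question quoted above: the textbook alternative to
the source IP is the HTTP Via header -- every proxy appends its own name
when forwarding, so a request that has already been through this box
comes back carrying that name. A minimal sketch (the host names are
invented, and a real check would parse the header into entries rather
than doing a raw substring match):

/*
 * Sketch only, not Squid's implementation.  Each proxy that forwards a
 * request appends its own name to the Via header, so a forwarding loop
 * shows up as our own name already being present when the request
 * arrives.  via_header and our_name are example values.
 */
#include <stdio.h>
#include <string.h>

/* Return nonzero if our_name already appears in the Via header value.
 * A real implementation would split the header on commas and compare
 * the received-by field of each entry instead of using strstr(). */
static int request_is_looping(const char *via_header, const char *our_name)
{
    if (!via_header)
        return 0;
    return strstr(via_header, our_name) != NULL;
}

int main(void)
{
    const char *via = "1.0 upstream.example.net, 1.0 accel.example.com";
    const char *me = "accel.example.com";

    if (request_is_looping(via, me))
        printf("loop detected: refuse to forward the request\n");
    else
        printf("no loop detected\n");
    return 0;
}

As the note above implies, this only helps when the header actually
survives the trip; a hop that strips Via defeats it, which is presumably
what makes the "no information" case the hard one.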
