Re: Upper limit on concurrent connections?

From: Dancer <dancer@dont-contact.us>
Date: Thu, 15 Oct 1998 17:12:27 +1000

I spoke to a couple of the kernel folks about this a while back. The
suggestion was that as the number of available file-descriptors
increased, 'suckiness' would increase right along with it...pretty much
due to the overhead of select() having to deal with larger and larger
fd_sets.

Every extra descriptor contributes a little overhead (well, let's be
pedantic...every _eight_ of them adds another byte to the fd_set that
has to be scanned). Exactly how much impact this has depends on how many
CPU cycles you have to spare. On a really slow machine, even 3000 fd's
is a bit of an ask...while on 'gruntier' boxen, you can handle 12000 or
more without blinking.
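
To make that concrete, here's a minimal sketch (plain C, nothing to do
with Squid's actual comm loop, and the numbers are illustrative only) of
the sort of select()-driven sweep we're talking about. The per-call cost
tracks the size of the fd_set, not how many descriptors actually have
anything ready:

/*
 * Illustrative only -- not Squid source, just the shape of a
 * select()-based sweep.  An fd_set is a bitmap, so every eight
 * descriptors add one more byte that both select() and this loop
 * have to scan on every pass, ready or not.
 */
#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    fd_set readfds;
    int fd;
    int maxfd = FD_SETSIZE - 1;   /* highest descriptor we pretend to watch */

    FD_ZERO(&readfds);
    printf("fd_set covers %d descriptors = %d bytes scanned per call\n",
           FD_SETSIZE, FD_SETSIZE / 8);

    /* Typical post-select() sweep: O(maxfd) work per call, even when
     * only one descriptor has anything to say. */
    for (fd = 0; fd <= maxfd; fd++) {
        if (FD_ISSET(fd, &readfds)) {
            /* service fd here */
        }
    }
    return 0;
}

Bump the descriptor limit (which is what the Linux filehandle patch is
about) and that bitmap, and the scan over it, grows in step.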

D

Simon Rainey wrote:
> Does anyone have experience of Squid running with large numbers of
> concurrent client connections? Our caches are regularly seeing 1000 - 1500
> concurrent connections and have around 4000 FDs in use. These numbers are
> increasing with time as our user base grows. We're running Squid 1.1.22
> under Linux with the filehandle patch. I can continue to increase the
> appropriate limits, but is there a practical limit to the number of
> concurrent connections before performance starts to suffer (given that disk
> I/O isn't a bottleneck)?
>
> Thanks,
> Simon.
>
> -------------------------------------------------------------------------
> Simon Rainey Direct Line: 01235 823238
> Principal Internet Development Engineer Fax: 01235 823424
> RM Internet for Learning E-mail: srainey@rmplc.net
> New Mill House, 183 Milton Park, Abingdon, Oxfordshire, OX14 4SE, England