Re: Max simultaneous connections limit on per-destination basis

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Wed, 2 Nov 2005 17:06:25 +0100 (CET)

On Wed, 2 Nov 2005, Radu Rendec wrote:

> Usage scenario: squid set up as httpd accelerator, with many virtual
> hosts on the accelerated servers. If the number of simultaneous
> connections is limited in the httpd servers and many simultaneous
> requests come to squid, it will end up filling its fd table waiting for
> the httpd servers to accept new connections.

One simple solution would be to connect to the backend servers using
cache_peer, which already has an option (max-conn) to limit the maximum
number of concurrent requests.
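Something along these lines should do it (the hostname, port and the
max-conn=50 value below are only placeholders for your own setup, and the
originserver/name options require Squid 2.6 or later):

    # placeholder backend host, port and connection limit; adjust for your servers
    cache_peer backend.example.com parent 80 0 no-query originserver max-conn=50 name=backend
    # route requests to this peer (the "all" ACL is in the default config)
    cache_peer_access backend allow all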

In future versions of Squid, cache_peer will be the preferred method of
forwarding requests in an accelerator setup.

Regards
Henrik
