Re: Max simultaneous connections limit on per-destination basis

From: Radu Rendec <radu.rendec@dont-contact.us>
Date: Thu, 03 Nov 2005 19:25:11 +0200

Hi!

Henrik, thank you so much for taking the time to answer my question!
Most likely cache_peer is the way I should go (I have already found some
documentation in the wiki and I'll move to squid-users for further help
with this). But completely changing the setup is not very easy to do -
there are more than 300 sites behind that Squid machine and I cannot
afford to change things without proper testing.
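
As far as I understand, that would basically mean one cache_peer line
per backend with the max-conn option you mention, along these lines
(the hostname and the limit below are placeholders, and I still have to
check the exact option syntax against our Squid version):

cache_peer backend.example.com parent 80 0 no-query max-conn=50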

Until I completely migrate to cache_peer, I'd like to make my patch
work. I've recently found out that my database fills up with "ghost"
connections and I suspect that I put my release hook in the wrong place.

What I did was add two hooks in client_side.c - one to increment the
count of concurrent connections to the destination domain, and one to
"release" the connection (decrement that count in the database).

The increase hook is placed in clientReadRequest(), right before the
call to clientAccessCheck() (the 2.4 tree has only one such call,
whereas the 2.5 tree has several).

The decrease hook is placed in httpRequestFree(), right after the
following two lines:
clientUpdateCounters(http);
clientdbUpdate(conn->peer.sin_addr, http->log_type, PROTO_HTTP,
        http->out.size);
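
In other words, the two hooks look roughly like this (a simplified
sketch; dest_conn_inc() and dest_conn_dec() stand in for my actual
database calls, they are not existing Squid functions):

/* in clientReadRequest(), right before clientAccessCheck():
 * count one more connection towards the destination host */
dest_conn_inc(http->request->host);

/* in httpRequestFree(), right after the clientdbUpdate() call above:
 * release the slot for that destination */
if (http->request)
    dest_conn_dec(http->request->host);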

Is there any code path where these hooks could miss a new request or
the end of one (which would explain the ghost entries)?

Thanks,

Radu Rendec

On Wed, 2005-11-02 at 17:06 +0100, Henrik Nordstrom wrote:
> One simple solution would be to connect to the backend servers using
> cache_peer, which happens to already have an option to limit the maximum
> number of concurrent requests (the max-conn option).
>
> In future versions of Squid, cache_peer is by far the preferred method
> of forwarding requests in an accelerator setup.
>
> Regards
> Henrik