Re: threading idea

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Sat, 17 Mar 2001 11:17:25 +0100

I am not considering making Squid multi-threaded. I am considering
giving it a multi-process design.

No, the "thundering herd" problem does not really apply to Squid. It
applies where you have a large number of processes all listening on the
same port (as in Apache), but in this case we will only have a small
number of processes, equal to the number of CPUs available. It does not
matter if all of them wake up when one or more new connection requests
arrive.
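As a very rough sketch of the layout (not Squid code; the port number
and worker count are only placeholders), a fork()-after-listen() setup
where a handful of processes share one listening socket could look
like this:

/* Sketch only: one listen socket shared by a few worker processes,
 * one per CPU.  With so few processes, the cost of all of them waking
 * on a new connection is negligible. */
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCPUS 2                         /* assumed worker count */

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int i;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(3128);        /* placeholder http_port */

    if (fd < 0 || bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {
        perror("socket/bind/listen");
        exit(1);
    }

    for (i = 0; i < NCPUS; i++) {
        if (fork() == 0) {              /* child: one worker per CPU */
            for (;;) {
                int client = accept(fd, NULL, NULL);
                if (client < 0)
                    continue;
                /* ... hand the connection to this worker's event loop ... */
                close(client);
            }
        }
    }
    while (wait(NULL) > 0)              /* parent just reaps workers */
        ;
    return 0;
}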

What gets a bit tricky is how to distribute the request load evenly
amongst the CPUs, and perhaps how to avoid kernel-level contention on
the listen port (if there is such a thing for non-blocking listen
ports), but I am confident a good design for distributing the
connection requests can be found.
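For illustration only (assuming each worker runs its own select() loop
and the listen socket is set non-blocking), a wakeup that another
process has already serviced just costs one accept() returning EAGAIN:

/* Sketch: how each worker might treat the shared, non-blocking listen
 * socket inside its select() loop. */
#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void worker_loop(int listen_fd)
{
    /* never block in accept() */
    fcntl(listen_fd, F_SETFL, fcntl(listen_fd, F_GETFL, 0) | O_NONBLOCK);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        /* real code would also add this worker's client sockets here */

        if (select(listen_fd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;

        if (FD_ISSET(listen_fd, &rfds)) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    continue;           /* another worker won the race */
                continue;               /* transient error; keep going */
            }
            /* ... register 'client' with this worker's event loop ... */
            close(client);
        }
    }
}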

/Henrik

Robert Collins wrote:
>
> I don't think this has been proposed before but if it has just point me
> at the archives...
>
> Alter Squid to have a worker thread for each CPU, and a control thread
> which handles the select loop, logging, and adding items to the work
> queue.
>
> The core idea is that when two or more requests would be acted upon
> from the current core loop, they can be processed in parallel, with the
> same non-blocking request-processing logic as today.
>
> Issues:
> Some of Squid is non-reentrant, and much of it is not thread-safe
> (i.e. mutexes would be needed on global variables or data structures
> that could be accessed in parallel).
>
> Benefits:
> Should be able to utilise dual-processor machines more effectively.
>
> Thoughts?
>
> Rob
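
For comparison, the worker-thread/work-queue model quoted above would
need something like a mutex-protected queue between the control thread
and the workers. A minimal sketch, assuming pthreads (the names
work_item, queue_push and queue_pop are illustrative, not Squid's):

#include <pthread.h>
#include <stdlib.h>

struct work_item {
    int fd;                       /* descriptor that select() found ready */
    struct work_item *next;
};

static struct work_item *head, *tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Called by the control thread after its select() pass. */
void queue_push(int fd)
{
    struct work_item *w = malloc(sizeof(*w));
    w->fd = fd;
    w->next = NULL;
    pthread_mutex_lock(&lock);
    if (tail)
        tail->next = w;
    else
        head = w;
    tail = w;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* Called by each worker thread; blocks until work is available. */
int queue_pop(void)
{
    struct work_item *w;
    int fd;
    pthread_mutex_lock(&lock);
    while (head == NULL)
        pthread_cond_wait(&nonempty, &lock);
    w = head;
    head = w->next;
    if (head == NULL)
        tail = NULL;
    pthread_mutex_unlock(&lock);
    fd = w->fd;
    free(w);
    return fd;
}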
Received on Sat Mar 17 2001 - 03:27:21 MST
