Re: [squid-users] Choose one worker

From: Alfredo Rezinovsky <alfredo_at_fing.uncu.edu.ar>
Date: Wed, 28 Aug 2013 10:15:16 -0300

On 28/08/13 02:23, Amos Jeffries wrote:
> On 28/08/2013 2:19 p.m., Alfredo Rezinovsky wrote:
>> On 27/08/13 22:43, Alfredo Rezinovsky wrote:
>>> I have high-load servers and need to use workers, or else one CPU
>>> core climbs to 100% usage and I see the network slow down.
>>>
>>> Is there a way to choose a worker for a single specific request?
>>> I have a script and I need to make a request knowing which worker
>>> will answer it.
>
> That is an operating system feature. I believe connections are supposed
> to be spread evenly, at random, over the workers, although there is
> evidence that the OS often gets this wrong and Squid has some rotation
> hacks to balance it a bit better.
>
>>>
>>> I've seen both the coordinator and all the workers listening in TCP
>>> 3128 using lsof. This is very confusing.
>
> Only the workers are listen()'ing and accept()'ing the incoming
> connections, though. The coordinator is "listening" there in order to
> be able to pass the open socket details to workers as they start up.
> lsof is not presenting a true picture of usage for each socket, just
> what each process is *able* to do with it - even close().
>
>
>>>
>>> --
>>> Alfrenovsky
>>>
>>>
>> Answering myself - hope this is useful to others...
>>
>> workers 2
>> http_port 3128
>> http_port 3100${process_number}
>>
>> This way I can use port 31001 for the 1st worker and 31002 for the 2nd.
>>
>
> NOTE: this still leaves several problems:
>
> 1) HTTP contains a persistent connection feature (aka "keep-alive")
> where multiple requests are sent on one connection. It is the request
> count, not the connection count, that overloads a worker.
>
> 2) you now have to write something explicitly to replicate the
> balancing functionality that is built into the kernel.
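Regarding (1): if per-request balancing matters more than keep-alive efficiency, persistent client connections can be turned off so that every request arrives on its own (separately balanced) connection. A sketch using Squid's standard directive - check the defaults for your version before relying on it:

workers 2
client_persistent_connections off

This trades the efficiency of keep-alive for a connection count that tracks the request count.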
The balancing works well enough for me. What I needed was to "break" the
balancing and talk to a specific worker I choose.
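For anyone scripting this: a minimal sketch of sending one request through a chosen worker, based on the port scheme above (http_port 3100${process_number}, so worker N listens on 31000 + N). The hostname and helper names are my own, not anything from Squid:

```python
import urllib.request

def worker_proxy_url(worker, host="127.0.0.1", base=31000):
    # Port scheme from the squid.conf above: worker N listens on 31000 + N,
    # e.g. worker 1 -> 31001, worker 2 -> 31002.
    return "http://%s:%d" % (host, base + worker)

def fetch_via_worker(url, worker):
    # Route a single request through the chosen worker's dedicated port,
    # bypassing the shared (balanced) port 3128.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": worker_proxy_url(worker)}))
    return opener.open(url, timeout=10).read()

# e.g. fetch_via_worker("http://example.com/", 1) is always answered by worker 1
```

Note that on a keep-alive connection every request goes to the same worker anyway; the dedicated port just makes the choice of worker explicit.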
Received on Wed Aug 28 2013 - 13:15:34 MDT