Re: [RFC] Squid process model and service name impact

From: Alex Rousskov <rousskov_at_measurement-factory.com>
Date: Wed, 29 Jan 2014 18:29:18 -0700

On 01/29/2014 04:03 PM, Amos Jeffries wrote:
> On 2014-01-30 06:55, Alex Rousskov wrote:
>> On 01/29/2014 02:00 AM, Amos Jeffries wrote:
>>
>>> Let's assume for a minute or ten that we all agree with this design and
>>> start implementing it ...
>>>
>>> A default Squid will need the following new directives:
>>>
>>> collapsed_forwarding_metadata_shm
>>> collapsed_forwarding_queues_shm
>>> collapsed_forwarding_readers_shm
>> ...
>>> * cache_dir needs to have:
>>> * a new option added to explicitly configure the path to the
>>> disker[...]
>>> * a new option to link to the shared-memory segment for readers, and
>>> * a new option to link to the shared-memory segment for writers.
>>>
>>>
>>> Then we get to the UDS sockets...
>>>
>>> # assuming that we optimize a bit with a param for the process number
>>> # for a standard 8-core box with 2 cores for OS & coordinator.
>>> workers 6
>>> coordinator_process_uds_path /path/to/coordinator/uds/socket.ipc
>>> kid_process_uds_path 1 /path/to/kid1/uds/socket.ipc
>>> kid_process_uds_path 2 /path/to/kid2/uds/socket.ipc
>>> kid_process_uds_path 3 /path/to/kid3/uds/wheeee.ipc
>>> kid_process_uds_path 4 /path/to/kid4/uds/socket.ipc
>>> kid_process_uds_path 5 /path/to/kid5/uds/socket.ipc
>>> kid_process_uds_path 6 /path/to/kid6/uds/haha.ipc
>>
>> The kid executable (e.g., disker, coordinator, worker, etc.) path is
>> already covered by the current Squid executable path.
>>
>> All others above (and more) can be covered with the following two
>> options, with reasonable defaults (which may include a service name
>> component), until we have a need for something more refined:
>>
>> shared_memory_dir
>> uds_dir
>>
>> Not bad, IMO!
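
For illustration only, a minimal squid.conf sketch of what those two
directives might look like (the names are as proposed above; the paths
and defaults here are purely hypothetical):

  # hypothetical sketch; directive names as proposed, paths invented
  shared_memory_dir /var/run/squid/shm
  uds_dir /var/run/squid/ipc

Everything instance-specific would then hang off those two locations
instead of a dozen per-segment and per-kid path options.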

> It is, however, the Y solution to an XYZ problem.
> .. unable to run two concurrent instances from the same config file

The ability to run two concurrent instances from the same configuration
file does not sound like a reasonable goal/requirement to me. Has
anybody even asked for that? What was their motivation? I know folks
want to run concurrent instances from the same Squid build, but using
the same squid.conf seems like a very, very strange use case to me.

Bug 3608 does not mention the requirement to use the same squid.conf
AFAICT. In fact, most comments there mention using different Squid
configurations for different instances.

I was totally lost in the rest of the XYZ points, probably because I do
not understand the XYZ problem you are referring to.

> When used via a squid.conf directive, "same config" implies "same path
> directive value". Ergo the solution does not actually solve the core
> problem of multi-instance collisions. It simply adds yet another choice
> of how/where the admin has to act to solve the real issue (squid.conf
> uds_dir/shared_memory_dir vs. squid.conf chroot vs. rebuild with
> --prefix).

AFAIK, the "concurrent instances" are using different squid.conf for
each instance (because the instances are different or there would not be
a point in running many of them). Am I wrong?

> -n service name also adds yet another option, but it does solve the
> outstanding Z-problem complaint of "same config file" without implying
> solution Y (chroot / base_root setup).

At the expense of moving configuration to the command line. If we added
--shared-memory-dir and the like as command line options, they would
work equally "well". But we should not do that: the command line should
either contain only options that cannot be configured via squid.conf
(another "ideal" that is difficult to prove), OR it should accept any
squid.conf option (via some general --add-this-string syntax).
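
Concretely, the -n approach under discussion amounts to something like
this on the admin side (the service names and config file paths below
are invented for illustration):

  # illustrative only: two concurrent instances from the same build,
  # each with its own service name and (in practice) its own squid.conf
  squid -n proxy1 -f /etc/squid/proxy1.conf
  squid -n proxy2 -f /etc/squid/proxy2.conf

Which is exactly configuration on the command line, just spelled -n
instead of --shared-memory-dir.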

Cheers,

Alex.