Re: Sharing DNS cache among Squid workers

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Wed, 26 Jan 2011 01:20:54 +0000

On Tue, 25 Jan 2011 17:58:36 -0700, Alex Rousskov
<rousskov_at_measurement-factory.com> wrote:
> On 01/13/2011 03:54 PM, Amos Jeffries wrote:
>> On 14/01/11 11:20, Robert Collins wrote:
>>> On Fri, Jan 14, 2011 at 11:13 AM, Alex Rousskov
>>> <rousskov_at_measurement-factory.com> wrote:
>>>> On 01/13/2011 02:18 PM, Robert Collins wrote:
>>>>> Have you considered just having a caching-only local DNS server
>>>>> colocated on the same machine?
>>>>
>>>> I am sure that would be an appropriate solution in some environments.
>>>> On the other hand, sometimes the box has no capacity for another server.
>>>> Sometimes the traffic from 8-16 Squids can be too much for a single DNS
>>>> server to handle. And sometimes administration/policy issues would
>>>> prevent using external caching DNS servers on the Squid box.
>>>
>>> This surprises me - surely the CPU load for a dedicated caching DNS
>>> server is equivalent to the CPU load for squid maintaining a DNS cache
>>> itself; and DNS servers are also multithreaded?
>>>
>>> Anyhow, I've no particular objection to it being in the code base, but
>>> it does seem like something we'd get better results by not doing (or
>>> having a defined IPC mechanism to a single (possibly multi-core) cache
>>> process which isn't a 'squid'. [Even if it is compiled in the squid
>>> tree].
>>>
>>> -Rob
>>
>> That's pretty much my opinion too.
>>
>> A shared resolver on the same box, where possible, is our best practice
>> anyway. Where DNS speed is important, users keep their DNS as close as
>> possible to the Squid.
>>
>> It may be worthwhile instead to research the lightest available DNS
>> resolver and recommend it to assist people.
>>
>> When the workers share memory blocks, merging these caches would be
>> worthwhile to de-duplicate the entries. But until then it's just
>> adding complexity.
>
> If we implement DNS cache sharing among workers, then the shared caches
> will share entries (using shared memory), of course. Or did you mean
> something else by "doing shared memory blocks"?

Yes. "mutually accessible memory" / "shared memory" where one writes and
all can read from the same memory space was what I meant.

I thought you were proposing to do it on top of the existing IPC channels,
*copying* data around.
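
For illustration, here is a minimal POSIX sketch of the "one writes, all
read" model. The segment name and record layout are made up for the example
and are not anything in Squid:

/* Each worker maps the same named segment; the kernel gives every
 * process a view of the same pages, so nothing is copied over a pipe. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

struct DnsEntry {
    char name[256];   /* query key, e.g. a hostname */
    char addr[64];    /* cached answer */
};

int main() {
    const size_t tableSize = sizeof(DnsEntry) * 1024;
    int fd = shm_open("/squid-dns-cache", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, tableSize);
    DnsEntry *table = static_cast<DnsEntry *>(
        mmap(nullptr, tableSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    /* one worker writes an entry ... */
    std::strcpy(table[0].name, "example.com");
    std::strcpy(table[0].addr, "192.0.2.1");

    /* ... any other worker that mapped "/squid-dns-cache" reads the same
     * bytes directly; no data is re-sent over an IPC channel. */
    munmap(table, tableSize);
    close(fd);
    return 0;
}

In practice the table would need locking or atomic slot updates, but the
point is that readers touch the writer's pages directly rather than
receiving a copy.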

If you do implement this shared-memory backend, it is probably worth doing
it as a generic cache that can be leveraged via inheritance by both
ipcache/fqdncache and the helper caches, and so on. They all have a
read-often, write-rarely structure with small key/value data records.
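
Something along these lines, with the class and method names invented for
the sketch rather than taken from existing Squid interfaces:

/* Generic shared-memory cache of small key/value records,
 * tuned for read-often/write-rarely access. */
#include <string>
#include <ctime>

class SharedKvCache {
public:
    virtual ~SharedKvCache() {}

    /* Fill 'value' and return true on a hit. Entries are small and
     * copied out, so readers never hold the shared segment for long. */
    virtual bool lookup(const std::string &key, std::string &value) const = 0;

    /* Writes are rare (a fresh DNS answer, a new helper reply), so a
     * single writer lock over the segment would be acceptable. */
    virtual void store(const std::string &key, const std::string &value,
                       time_t ttl) = 0;
};

/* ipcache:      key = hostname,     value = packed address list */
/* fqdncache:    key = IP address,   value = hostname            */
/* helper cache: key = helper query, value = helper reply        */

Each user of the cache would then only supply its own key/value encoding
and TTL policy.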

Amos
Received on Wed Jan 26 2011 - 01:21:00 MST
