Re: [squid-users] Managing clusters of siblings (squid2.7)

From: Chris Hostetter <hossman_squid_at_fucit.org>
Date: Tue, 29 Sep 2009 11:31:09 -0700 (PDT)

: Another solution could be to use a multi-level CARP config, which incidentally
: scales far better horizontally than ICP/HTCP, as it eliminates the iterative
: "sideways" queries altogether by hashing URLs to parent cache_peers. In this
        ...
: different IP or TCP port that actually does the caching. This solves your
: issue by giving every edge instance the same list of parent cache_peers - it

I briefly considered an idea very similar to this (using a hashing
feature of the load balancers, designed for session affinity, in
place of any peering in squid) but ruled it out fairly early on because it
would mean that any modification to the cluster (adding or removing a
machine) would immediately drop the cache hit rate of the entire cluster
-- the only hits would be for URLs whose hashcode maps to the same
bucket with N machines as it does with N +/- 1.
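A rough sketch (not squid code, and with MD5 standing in for whatever hash a real load balancer uses) of why a naive modulo-style scheme behaves this way -- growing the pool from N to N+1 machines moves the large majority of URLs to a different bucket, so almost the whole cache goes cold:

```python
import hashlib

def bucket(url, n):
    # naive session-affinity style hashing: hash the URL, take it modulo
    # the number of machines in the pool
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return h % n

urls = ["http://example.com/page%d" % i for i in range(10000)]
n = 5
moved = sum(1 for u in urls if bucket(u, n) != bucket(u, n + 1))
print("%.0f%% of URLs change bucket going from %d to %d machines"
      % (100.0 * moved / len(urls), n, n + 1))
```

With 5 machines growing to 6, something like five out of six URLs land in a different bucket.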

Is there a feature of CARP that can be used to mitigate this problem?
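For context, my understanding is that CARP does not use modulo bucketing at all: it combines a hash of the URL with a hash of each peer's name and picks the peer with the highest combined score ("highest random weight"), so removing a peer should only remap that peer's share of URLs. A toy sketch of that property (MD5 here is just for illustration, not the hash function the CARP draft actually specifies):

```python
import hashlib

def carp_pick(url, peers):
    # highest-random-weight selection: score every (peer, url) pair,
    # route the URL to the peer with the maximum score
    def score(peer):
        return int(hashlib.md5((peer + url).encode()).hexdigest(), 16)
    return max(peers, key=score)

peers = ["parent%d" % i for i in range(5)]
urls = ["http://example.com/page%d" % i for i in range(10000)]
before = {u: carp_pick(u, peers) for u in urls}
after = {u: carp_pick(u, peers[:-1]) for u in urls}  # drop one peer
moved = sum(1 for u in urls if before[u] != after[u])
print("%.0f%% of URLs moved after dropping 1 of 5 peers"
      % (100.0 * moved / len(urls)))
```

Only the URLs that had been routed to the dropped peer move, i.e. roughly 1/N of the cache rather than nearly all of it.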

I see that a load factor can be set per parent, so it seems like it might
be possible to mitigate the removal of a machine in the short term by
doubling the load factor on another peer (the one before it in the config,
I would assume?) so that the hash function still picks the same machines
for all existing cached URLs -- but that seems like it could only
be used as a short-term workaround for an urgent outage; I'm not sure how
you would apply the same idea to adding/removing machines periodically based
on need.
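For what it's worth, in squid 2.6/2.7 the per-parent load factor is derived from the weight= option on a carp cache_peer, so I imagine the short-term workaround above would look something like this (hostnames hypothetical, and whether the doubled weight really preserves existing assignments is exactly the open question):

```
# squid.conf fragment (hypothetical hostnames): parent2 has been removed
# from the array, so parent1's weight is doubled in the hope that the
# CARP hash keeps sending parent2's former URLs somewhere predictable
cache_peer parent1.example.com parent 3128 0 carp weight=2 no-query
cache_peer parent3.example.com parent 3128 0 carp weight=1 no-query
```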

-Hoss
Received on Tue Sep 29 2009 - 18:31:11 MDT

This archive was generated by hypermail 2.2.0 : Wed Sep 30 2009 - 12:00:03 MDT