Re: [squid-users] Anyone considered this potential feature?

From: Michael Sparks <>
Date: Fri, 23 Oct 1998 17:28:40 +0000

On Fri, 23 Oct 1998, Chris Tilbury wrote:
> > First ensure Squid2 is running on all the machines, then for each request:
> >
> > X = result of Squid URL hash for its digest.
> > Y = X mapped over N machines, selecting 1. (ala hash value mod N)
> > IF Y=0 then go direct
> > ELSE get page from sibling Y (unless sibling fails, then go direct)
> Sounds like a peer version of CARP to me. Is that the general idea?

I've seen the CARP compilation option, and seen carp.c, but no
documentation on it... The name sounds promising however...

> Hmm. Do you foresee transmitting the cache digests as being a problem?

Not really, it's just that this would be a useful way of increasing the
cluster hit rate. It uses squid to turn every machine in the cluster
into an FEP for the whole cluster. At the moment it looks like our best
bet for delivering better service times is to increase the hit rate.

The side effect of using this to increase our hit rate is that we would
manage to do this without being forced to rely on cache digest lookups.
If host Y is always responsible for hash(URL) mod N, then there's no
point doing a lookup.

No lookup & no ICP :-)

External support of digests becomes slightly trickier to handle, but
that's a secondary issue at this stage. (And worth implementing later,
to handle incoming ICP traffic.)

> From
> the specs, it doesn't sound as if they would impose an enormous load, and
> you can turn off ICP querying of sibling caches in the config file, so you
> don't need all that ICP traffic.

From what I've read/seen, cache digests don't seem to create a huge
amount of traffic, but we'd like to keep all intra-cluster comms to
the bare minimum.

> I'm no expert, but I think this wouldn't be such a difficult thing to
> implement.

From what I can see looking at the source, the main function where this
could be hooked into is the src/CacheDigest.c::neighboursDigestSelect
function, plus some support in the config file. (Not necessarily the
best place, but it's definitely *a* place.)

> It does sound very much like CARP and most of the code for that
> is already there.

Hmm, I'll look into that as well...

> Maybe you could have multiple FEP systems, though, perhaps on a DNS RR
> basis. Squid is fairly good at detecting when a parent is down, after all,
> so your children (us! :-) shouldn't suffer if one of the FEP systems went
> away. Or have you already discounted this?

In this situation every cache machine in the cluster running this
version of squid would act as an FEP, so the clients wouldn't notice
any change, except (hopefully!) a noticeable performance boost. And
since every machine would be acting as an FEP, there's no single point
of failure. If CARP delivers this (or comes close to it), that would
be useful.


National & Local Web Cache Support        R: G95c
Manchester Computing                      E:
University of Manchester                  T: 0161 275 7195
Manchester UK M13 9PL                     F: 0161 275 6040
Received on Fri Oct 23 1998 - 09:30:38 MDT