Still not sure how this works...

From: Graham Toal <>
Date: Tue, 8 Apr 1997 15:17:07 -0500 (CDT)

> You may set the inside_firewall option to none so that all requests are
> resolved through neighbor (sibling and parent) proxies... but
> this way your top-level proxy would become a single point of failure, since
> your second-level proxies won't fetch objects directly if their parents are
> down... Without that, squid sometimes still fetches objects directly even
> though source_ping is off and parents are weighted with very high values...
> How come, Duane?
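
(For reference, I understand the setup being described to be something
like the following squid.conf fragment -- Squid 1.1-era directive names,
and the host name is just a placeholder, so take it as a sketch:)

```
# Route everything through a parent: declare the top-level proxy
# as a parent neighbor (HTTP port 3128, ICP port 3130)...
cache_host parent.example.com parent 3128 3130
# ...and tell squid that NO domain is "inside the firewall", so it
# must never fetch directly and always asks a neighbor instead.
inside_firewall none
```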

This is tangentially related to my question of a couple of weeks ago, which
no-one answered at the time; I'll take this weak excuse to ask again :-)
Perhaps the explanations for each might illuminate the other.

From what I've read since on the list, I *think* the behaviour I want falls
out in the wash, but I'd like to hear it confirmed by someone (Duane?) before
I ask my boss to cough up the money for another Pentium and big disk...



Date: Fri, 21 Mar 1997 12:55:13 -0600 (CST)
From: Graham Toal <>
Subject: backing off when parent can't get page?

        ----+-------------- INTERNET --------------------+----
            |                                            |
      link A (Mexico)                              link B (USA)
            |                                            |
            V                                            V
     +-------------+                              +------------+
     |             |                              |            |
     | Slave cache | <--- fast internal link ---> | Main cache |
     |      A      |        (T1 radio)            |     B      |
     |             |                              |            |
     +-------------+                              +------------+
            ^                                            ^
            |                                            |
        A clients                                   B clients

Here's the situation I have: I want to be able to set up caching
such that B clients always get their pages on link B, except when
link B is down, in which case they'll switch to link A.

I want slave cache A to always pass requests to Main cache B if
Cache B is actually fulfilling them, but to use link A directly
if it is not. Note I'm worried about Cache B not fulfilling
requests because link B is down, not because Cache B is offline.

At the moment Cache B is a squid cache and cache A is a dumb CERN
proxy, and neither system has the graceful backoff I'd like in case
of a link outage on either link. Is there a way to use squid
to get the desired effect? From the docs I think that if cache A
is the child and cache B is the parent, it might work, but the
docs were ambiguous about when the direct link is used in preference
to the parent. Does it make a difference if the outage is because
the parent's link is down versus the parent itself being down?
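
(For concreteness, the parent/child arrangement I have in mind on
slave cache A would be roughly the fragment below -- again Squid
1.1-era syntax with a made-up host name, and whether the fallback
actually happens when B's *link* rather than B itself is down is
exactly what I'm asking:)

```
# On slave cache A: declare main cache B as a parent neighbor.
# Direct fetches stay allowed (inside_firewall is NOT set to none),
# so A should go direct over link A when B stops answering ICP.
cache_host cache-b.example.com parent 3128 3130
```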

The next question is: is the same thing possible in a symmetrical
setup? A clients always use link A, B clients always use link B,
except when either link A or link B is down, in which case they
automatically get their pages via the other cache? (Is this a
sibling cache?) My understanding of how squid works is that
if the page is in the sibling cache it will be returned, otherwise
it will be fetched directly. I don't know that if the direct fetch
fails, there's a way to go back to the sibling cache and demand that
it fetches the page (like a parent cache). But I'm new at squid
stuff and it looks complex and powerful enough that there may be a way
to get what I want with sufficient clever tweaking of the options.
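
(If it helps, the symmetrical setup I'm picturing would look roughly
like this on each box -- hypothetical host names, and my reading of
sibling semantics as described above, not a tested config:)

```
# On cache A:
cache_host cache-b.example.com sibling 3128 3130
# On cache B:
cache_host cache-a.example.com sibling 3128 3130
# As I understand it, a sibling is only queried (via ICP) for objects
# it already holds; on a miss each cache fetches directly over its
# own link, which is why I doubt a failed direct fetch falls back
# to the sibling the way it would to a parent.
```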

(Btw I know that a better way to do this is use load-balancing routing
protocols, but that's not an option here)


PS Note A clients have no direct path to cache B and vice-versa.
Both sets of clients must use their own assigned cache.
Received on Tue Apr 08 1997 - 13:26:37 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:34:58 MST