Re: [squid-users] How get negative cache along with origin server error?

From: Henrik Nordstrom <henrik_at_henriknordstrom.net>
Date: Thu, 02 Oct 2008 12:31:03 +0200

By default Squid tries to use a parent 10 times before declaring it
dead.

Each time Squid retries a request it falls back on the next possible
path for forwarding the request. What that is depends on your
configuration. In normal forwarding without never_direct there are
usually no more than two selected active paths: the selected peer (if
any) plus going direct. In accelerator mode or with never_direct more
peers are selected as candidates (one sibling, and all possible parents).

These retries happen on

* 504 Gateway Timeout (including local connection failure)
* 502 Bad Gateway

or if retry_on_error is enabled also on

* 403 Forbidden
* 500 Internal Server Error
* 501 Not Implemented
* 503 Service Unavailable

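Enabling those extra retries is a one-line squid.conf change (it defaults to off; syntax as I understand it for Squid 2.6/2.7):

```
# Also retry via the next candidate path on 403/500/501/503,
# not only on 502/504.
retry_on_error on
```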
Please note that there is a slight name confusion relating to max-stale.
The Cache-Control: max-stale request directive is not the same as the
squid.conf max_stale directive.

Cache-Control: max-stale=N is a permissive request directive, saying
that responses up to the given staleness are accepted as fresh without
needing a cache validation. It is not defined for responses.

The squid.conf setting is a restrictive directive, placing an upper
limit on how stale content may be returned if cache validation fails.

The Cache-Control: stale-if-error response header is equivalent to the
squid.conf max_stale setting, and overrides it.

The default for stale-if-error if not specified (and for squid.conf
max_stale) is infinite.
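The interaction of the three controls can be sketched as follows. This is a hypothetical model for illustration, not Squid source; the function and parameter names are invented:

```python
def serve_stale_allowed(staleness, request_max_stale=None,
                        conf_max_stale=None, stale_if_error=None,
                        validation_failed=False):
    """Decide whether a cache may serve a response that is
    `staleness` seconds past its freshness lifetime.

    request_max_stale: Cache-Control: max-stale=N (permissive,
      request side) -- accept up to N seconds of staleness as
      fresh, no validation needed.
    conf_max_stale / stale_if_error: restrictive upper bound on
      staleness served when validation fails; None = infinite
      (the default for both).
    """
    # Permissive request directive: the client opts in to stale
    # content, so it is served as if fresh.
    if request_max_stale is not None and staleness <= request_max_stale:
        return True
    # Otherwise stale content is only served when validation
    # fails, and only within the restrictive limit.
    if validation_failed:
        # stale-if-error (response header) overrides squid.conf
        # max_stale when present.
        limit = stale_if_error if stale_if_error is not None else conf_max_stale
        return limit is None or staleness <= limit
    return False
```

With 'max_stale 0' in squid.conf and a failed validation, nothing stale is served (hence the 504 seen in the quoted report below); with the defaults, any staleness is accepted on error.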

Warning headers are not yet implemented by Squid. This is on the to-do list.

Regards
Henrik

On tis, 2008-09-30 at 10:32 -0500, Dave Dykstra wrote:
> Do any of the squid experts have any answers for this?
>
> - Dave
>
> On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
> > I am running squid on over a thousand computers that are filtering data
> > coming out of one of the particle collision detectors on the Large
> > Hadron Collider. There are two origin servers, and the application
> > layer is designed to try the second server if the local squid returns a
> > 5xx HTTP code (server error). I just recently found that before squid
> > 2.7 this could never happen because squid would just return stale data
> > if the origin server was down (more precisely, I've been testing with
> > the server up but the listener process down so it gets 'connection
> > refused'). In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
> > the origin server sends 'Cache-Control: must-revalidate' then squid will
> > send a 504 Gateway Timeout error. Unfortunately, this timeout error
> > does not get cached, and it gets sent upstream every time no matter what
> > negative_ttl is set to. These squids are configured in a hierarchy
> > where each feeds 4 others so loading gets spread out, but the fact that
> > the error is not cached at all means that if the primary origin server
> > is down, the squids near the top of the hierarchy will get hammered with
> > hundreds of requests for the server that's down before every request
> > that succeeds from the second server.
> >
> > Any suggestions? Is the fact that negative_ttl doesn't work with
> > max_stale a bug, a missing feature, or an unfortunate interpretation of
> > the HTTP 1.1 spec?
> >
> > By the way, I had hoped that 'Cache-Control: max-stale=0' would work the
> > same as squid.conf's 'max_stale 0' but I never see an error come back
> > when the origin server is down; it returns stale data instead. I wonder
> > if that's intentional, a bug, or a missing feature. I also note that
> > the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
> > stale) header attached if stale data is returned and I'm not seeing
> > those.
> >
> > - Dave

Received on Thu Oct 02 2008 - 10:31:14 MDT

This archive was generated by hypermail 2.2.0 : Tue Oct 07 2008 - 12:00:03 MDT