Re: Paying for someone else's traffic?

From: Henrik Nordstrom <>
Date: Mon, 02 Mar 1998 22:30:27 +0100

David J N Begley wrote:

> Understood. Given the dearth of complaints we've seen around
> here lately, I can only assume that this race condition is
> pretty rare, no?
> (Though of course just relying on that fact alone isn't a "fix".)

Actually, there are two race conditions.

1. The object expires between the ICP HIT and the resulting HTTP query.
Squid tries to minimize this by using a 30-second delta when checking
staleness on ICP queries (it doesn't report a HIT for an object that
expires within the next 30 seconds).
2. The object gets purged from the cache. This can happen if your cache
is full, or if a local user makes a request that invalidates the cached
object.
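
The 30-second delta mentioned in point 1 can be sketched roughly as
follows. This is only an illustration: the function and variable names
here are made up, and the real (hard-coded) check lives in Squid's
icp.c:icpCheckUdpHit.

```python
import time

# Illustrative sketch only -- the actual margin is hard-coded in
# Squid's icp.c:icpCheckUdpHit; names here are hypothetical.
ICP_EXPIRY_DELTA = 30  # seconds

def would_report_icp_hit(expires_at, now=None):
    """Report an ICP HIT only if the object stays fresh for more than
    ICP_EXPIRY_DELTA seconds, shrinking the window for race #1."""
    if now is None:
        now = time.time()
    return expires_at - now > ICP_EXPIRY_DELTA
```

An object with 100 seconds of freshness left would be announced as a
HIT; one with only 15 seconds left would not.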

There will always be some rare occasions where one of these race
conditions does occur, triggering your cache to validate/fetch the
object from the origin server.

How common these are can easily be detected by simple log file
processing: 1 shows up as TCP_REFRESH_xxx, 2 as TCP_MISS (and very
rarely TCP_SWAPFAIL_MISS).
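
Such log processing can be as simple as tallying the result-code field
of access.log. In the native log format the fourth whitespace-separated
field is CODE/HTTP_STATUS; the sketch below assumes that format.

```python
from collections import Counter

def count_result_codes(lines):
    """Tally Squid access.log result codes. Assumes the native log
    format, where the 4th field is CODE/STATUS, e.g. TCP_MISS/200."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 4:
            counts[fields[3].split("/")[0]] += 1
    return counts
```

Feeding it the lines of access.log gives per-code counts, so the
relative frequency of TCP_REFRESH_xxx and TCP_MISS entries is easy to
read off.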

If your cache (or the neighbour) is overloaded, then the 30-second delta
might not be enough. There is currently no way to tune this without
hacking the source (icp.c:icpCheckUdpHit).

There is one configuration parameter in squid.conf that affects this,
namely icp_hit_stale. If this is set to on, then your cache reports an
ICP HIT even if the object has expired (it defaults to off). The effect
of this is that your neighbours will trigger a lot of refresh operations
in your cache.
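
For reference, turning this on is a one-line change in squid.conf:

```
# squid.conf
# Default is off: expired objects are not announced as ICP HITs.
# Setting it on makes neighbours see HITs for stale objects, pushing
# the refresh work onto this cache.
icp_hit_stale on
```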

Calculating net losses from TCP_REFRESH_xxx operations triggered by
neighbours is a bit tricky. If you have one single local TCP_HIT for the
same object after the refresh, then you have not lost a single bit, as
the refresh would have been done anyway (remember: TCP_REFRESH_xxx is
NOT a user-forced refresh; it is your cache validating a stale/expired
object).

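That accounting rule can be expressed in a few lines. This is a sketch
under my own assumptions about how you would group log entries (per
object, in log order), not something Squid provides:

```python
def wasted_refresh_bytes(events):
    """events: (result_code, bytes) pairs for ONE object, in log order.
    Per the accounting above, a neighbour-triggered TCP_REFRESH_xxx
    costs nothing if a later local TCP_HIT uses the refreshed copy
    (the refresh would have happened anyway); only refreshes with no
    subsequent local hit count as a net loss."""
    wasted = 0
    for i, (code, nbytes) in enumerate(events):
        if code.startswith("TCP_REFRESH"):
            if not any(c == "TCP_HIT" for c, _ in events[i + 1:]):
                wasted += nbytes
    return wasted
```

So a refresh followed by even one local hit counts as zero loss, while
a refresh that nobody local ever benefits from counts in full.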
Henrik Nordström
Sparetime Squid Hacker
Received on Mon Mar 02 1998 - 14:02:16 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:39:07 MST