Re: Stopping Netscape 4 Bookmarks from reloading pages

From: <>
Date: Tue, 28 Apr 1998 01:48:30 +1000 (EST)

> I apologise in advance if this is covered in the Squid docs. I have
> looked.

Actually, it's not covered. Well done for looking, though.

> We have been using squid happily for at least 18 months to serve a few
> local machines from machine with a dialup (ISDN) connection. Accessing
> a page using a bookmark with Netscape 3 will just fetch the page from
> the cache when not connected to the outside world.
> However, accessing a page through the same bookmarks using Netscape 4
> always fails with DNS lookup failure when not connected to the outside
> world. It seems to me that Netscape 4 is fascistly forcing the page to
> be reloaded when it is accessed via a bookmark. Just clicking on a
> normal URL works OK though, i.e. a page already in the cache can be
> retrieved when offline.

Ahaha, yes. I noticed this myself, just recently, when I was doing a lot
of work examining hits and misses.

> Is it possible to stop this happening?

Unfortunately, it's an imperfect world. I was working on something a little
different: a lot of (dare I use the same appellation? Why not?) fascist little
Java chat clients kept forcing reloads of the same images through the cache.
Sometimes they reloaded the same GIF five or six times in a row. All misses.

I _did_ hack up a patch at the time that overrides 'sensible' behaviour and
forces the cache to deliver the cached object _regardless_ of whether it's
been told to reload it or not: it fascistly beats them all (Pragma: no-cache,
Cache-Control, expiry, max-age).

However, it only works by using a magic number in a refresh_pattern. I'm
not on firm enough ground with Squid's flow of control to catch connection
and/or DNS failures and loop round to pull off a cached object.
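For the curious, the hack keys on an ordinary refresh_pattern line. The exact
sentinel is my own invention, so treat this as a sketch rather than gospel;
here the max-age field of 9999 stands in for the hypothetical magic number the
patched cache looks for:

```
# Standard syntax: refresh_pattern [-i] regex min percent max
# Hypothetical: the patch treats max-age 9999 as "always serve from
# cache, ignore any reload the client sends".
refresh_pattern -i \.gif$  1440  100%  9999
refresh_pattern .          0     20%   4320
```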

Mind you, I _am_ in favour of the option. There are times when nonsensical
circumstances require nonsensical behaviour, and I'm in favour of _options_
that support that.

Right at this moment, I'm hip-deep in Squid 1.2, looking at ways to patch
in the various nonsensical (but eminently practical) things that are
required for work, as well as other things that Brisnet needs. The two
groups intersect. It all comes down to a limited 'violation' of the rules
in some respect or another (ignore reloads on certain objects; honour reloads
but ignore expiry and freshness; remove from cache only after all other
candidates have been removed... that sort of thing).
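For what it's worth, the first two of those 'violations' can be sketched as
squid.conf fragments. The option names below (ignore-reload, override-expire,
override-lastmod, behind --enable-http-violations at build time) are the shape
this work grew into in later Squid releases, so take them as illustrative
rather than something 1.2 ships today:

```
# Ignore client reloads on certain objects (e.g. chat-client images):
refresh_pattern -i \.gif$  1440  100%  10080  ignore-reload

# Honour reloads, but ignore the server's expiry/freshness hints:
refresh_pattern .          0     20%   4320   override-expire override-lastmod
```

The third one (evict only after all other candidates are gone) is a
replacement-policy matter rather than a refresh rule, so there's no one-liner
for it.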

Received on Mon Apr 27 1998 - 08:57:27 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:39:57 MST