Re: IMS bugs and proposals

From: Balint Nagy Endre <>
Date: Wed, 9 Oct 1996 03:27:39 +0200 (MET DST)

Earl Fogel writes:
> On Mon, 7 Oct 1996 Christian Balzer <> wrote:
> >However people who think that the latest Dilbert strip has
> >to be out now can and will hit the "Reload" button, a lot. Right now
> >I estimate that about 20-30% of the external accesses are TCP_REFRESH
> >requests of perfectly valid data. An option to Squid which allows
> >these requests to be transformed into an IMS/size check procedure would
> >greatly reduce this load.
> Yes, please.
> Also, when you hit Reload, Netscape includes a Pragma: no-cache HTTP header
> in the request, and squid doesn't service the request from its cache. I'd
> think that's the correct behavior for GET requests, but when a request
> includes both an IMS and a Pragma: no-cache, *and* squid has a cached copy
> of the document that's newer than the one the browser has, then I think
> squid should return its cached copy.
Pragma: no-cache is added only when the client hits super-reload (Shift+click on Reload).
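The difference can be sketched as two raw HTTP/1.0 requests (the URL and date below are made-up examples, not from the thread):

```shell
# Sketch of the two reload flavours as the cache sees them (hypothetical URL/date).
# Plain Reload: a conditional GET, which a cache may answer with 304 Not Modified.
RELOAD=$(printf 'GET http://example.com/dilbert/ HTTP/1.0\r\nIf-Modified-Since: Mon, 07 Oct 1996 08:00:00 GMT\r\n')
# Super-reload (Shift+Reload): Pragma: no-cache forces a fetch from the origin.
SUPER=$(printf 'GET http://example.com/dilbert/ HTTP/1.0\r\nPragma: no-cache\r\n')
printf '%s\n---\n%s\n' "$RELOAD" "$SUPER"
```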
> So, hitting Reload would always give you a newer copy of a document
> (if one is available), but it wouldn't necessarily give you the newest
> copy.
> >Incidently I do see some TCP_IMS_HIT in the
> >access logs, so some browsers must be doing things more sensible
> >than Netscape (or am I missing something obvious here?).]
What you see depends on the server too.
If you have a Harvest 1.4pl0 parent, that will definitely give 200 responses
instead of 304 responses!
The same problem happens if the origin server doesn't support IMS. In theory
IMS is required by HTTP/1.0, but an administrator may effectively disable it by
not giving the information IMS needs in the responses (no Content-Length, no
Last-Modified). NCSA, Apache and Netscape servers can all be configured that way.
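As a sketch of why that matters: the proxy can only build a conditional request from a stored Last-Modified header (the response headers below are invented for illustration):

```shell
# Hypothetical stored response headers for a cached object.
HDRS='HTTP/1.0 200 OK
Last-Modified: Mon, 07 Oct 1996 08:00:00 GMT
Content-Length: 5120'
# Extract Last-Modified; without it there is nothing to put in If-Modified-Since.
LM=$(printf '%s\n' "$HDRS" | sed -n 's/^Last-Modified: //p')
if [ -n "$LM" ]; then
  # Legal GET/IMS: the upstream can now answer 304 instead of a full 200.
  REQ=$(printf 'GET /dilbert/ HTTP/1.0\r\nIf-Modified-Since: %s\r\n' "$LM")
else
  # No Last-Modified stored: only an unconditional GET is possible.
  REQ='GET /dilbert/ HTTP/1.0'
fi
printf '%s\n' "$REQ"
```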

In that case squid can't construct a legal GET/IMS.

As far as I know, squid handles GET/IMS and expired objects properly;
the problem is somewhere else, in the parent proxy or in the origin server!
(You should install 1.1beta3 or newer to keep objects beyond their expiration!)

Another solution is to set the heuristic TTL for the Dilbert strip appropriately,
so it expires when the new one comes out. If you don't know when a new one should
be expected, analyse squid's logfiles and find out WHEN the reload storm starts.
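For example, with the refresh_pattern directive of squid 1.1 (the regex, URL and times here are illustrative guesses; check the syntax shipped with your release):

```
# squid.conf sketch: refresh_pattern <regex> <min-age> <percent> <max-age> (minutes)
# Treat the strip as fresh for at most a day, then revalidate with GET/IMS.
refresh_pattern ^http://www.unitedmedia.com/comics/dilbert 0 20% 1440
```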
You may even install a cron job which does a 'client -r -s' on the URL in
question to prefetch it. AND don't forget to tell users that Dilbert or
whatsoever is prefetched, and the cache always keeps the latest one, so there
is no need for forced reloads.
(But users should not use 'never check' mode in their browsers!)
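A minimal crontab sketch for that prefetch (the tool path, URL and 06:25 schedule are assumptions; pick the time from your own logfile analysis):

```
# m h dom mon dow  command -- run just before the morning reload storm
25 6 * * * /usr/local/squid/bin/client -r -s http://www.unitedmedia.com/comics/dilbert/ >/dev/null 2>&1
```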

Andrew. (Endre "Balint" Nagy) <>
Received on Tue Oct 08 1996 - 18:33:07 MDT
