Re: minimum-cache time?

From: Dancer <>
Date: Mon, 28 Oct 1996 01:38:40 +1000

Miguel A.L. Paraz wrote:
> Hi,
> David J N Begley wrote:
> > I don't get it - why is this necessary when (certainly from our stats, and
> > seemingly others based upon problems people are seeing with Microsoft's
> > site) clearly Microsoft's stuff can be cached *already*? Or does Squid
> > 1.1 act differently to 1.0?
> Perhaps for other objects within MS, but,
> tells us not to cache it, three times in the MIME headers.
> >
> > Hasn't stopped anyone else causing problems - why should it stop us from
> > trying to fix 'em (as terrible as protocol violation is)?
> Yeah, but if we break protocols in response to others breaking them,
> the word "protocol" loses it's meaning...
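(For illustration, a server that wants to defeat caching typically says so with response headers along these lines. This is a made-up example, not the actual response from Microsoft's server; the point is that an Expires equal to Date, a Pragma, and a Cache-Control directive are three separate ways of saying "don't cache me":

```
HTTP/1.0 200 OK
Date: Mon, 28 Oct 1996 01:38:40 GMT
Expires: Mon, 28 Oct 1996 01:38:40 GMT
Pragma: no-cache
Cache-Control: no-cache
Content-Type: application/octet-stream
```
)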

Caches by their very nature are designed to save us bandwidth. Until we
established a functional caching system we were about ready to limit
connectivity. Network prices aren't cheap in our neck of the woods, and
the way things are going, those prices are going to go up, rather than
down. (politics *sigh*)

By all means, Squid should behave correctly and conformantly by default.

There are times, however, when the default behaviour is something you
very _definitely_ don't want (for one reason or another; it'll vary from
site to site, and from instance to instance).

Assume (just for the moment) that some software house puts up
their hot new 22MB shareware game on the WWW for download. Oh, and
they've got a lame server that insists (for some stupid reason) that
intermediate caches should not cache it. Well, there's a case for
changing the default behaviour when you suddenly find that virtually
every user wants to suck it down at 22MB a pop.
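(Squid's refresh_pattern directive is one place to express that sort of override. A sketch, assuming the `refresh_pattern regex min percent max` syntax with times in minutes; whether it can actually beat an origin server's explicit no-cache headers depends on your version, so treat this as illustrative:

```
# Keep big .exe downloads around for at least a week, up to a month,
# regardless of how stale the origin server claims they are.
#               regex    min(minutes)  percent  max(minutes)
refresh_pattern \.exe$   10080         90%      43200
```
)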

The URL redirection facility is one such method of altering the cache's
default behaviour to save us bandwidth. That's if
BLOODDEATHNUKEKILL-5D.EXE has mirrors, though; not all HTTP hierarchies
have mirrors that we can do sensible redirects to. So we then need another
method of changing default behaviour. Maybe just for one document. Maybe
for a sub-tree. Maybe for a class of files. Maybe for a whole site.
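(As a sketch of what that redirection facility looks like in practice: Squid hands a redirector program one request per line on stdin and reads the possibly-rewritten URL back on stdout. The mirror hostnames below are hypothetical, and the exact input format has varied between versions, so this is an illustration rather than a drop-in script:

```python
#!/usr/bin/env python
"""Minimal Squid redirector sketch.

Assumed interface: each stdin line looks like
    "URL ip-address/fqdn ident method"
and we reply with the rewritten URL, or a blank line to leave the
request untouched. All hostnames below are hypothetical.
"""
import sys

# Hypothetical map from an origin prefix to a nearby mirror prefix.
MIRRORS = {
    "http://ftp.bigvendor.example/": "http://mirror.our.net/bigvendor/",
}

def rewrite(url):
    """Return a mirror URL for known prefixes, else "" (no change)."""
    for prefix, mirror in MIRRORS.items():
        if url.startswith(prefix):
            return mirror + url[len(prefix):]
    return ""

if __name__ == "__main__":
    for line in sys.stdin:
        fields = line.split()
        url = fields[0] if fields else ""
        sys.stdout.write(rewrite(url) + "\n")
        sys.stdout.flush()  # Squid expects one unbuffered reply per request
```
)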

Fact is, all of us, at some point or another, are going to _really_ need
some method of breaking those protocols in the cleanest way possible,
and usually because someone else did it first. We're using caches to
save bandwidth, and the HTTP/1.1 protocol in particular is designed to
help us get the maximum benefit from our caches; but then again (like
most things) it can be used improperly, to everyone's detriment.

It's bad not to have access to damage control, at that point, even if it
violates accepted protocol. The fireman who saves your ass usually does
not knock on the door, in violation of civilized protocols ;)

Cheap shot, I know, but hey, it's late :)

Received on Sun Oct 27 1996 - 07:39:09 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:33:23 MST