Re: Features

From: Gregory Maxwell <nullc@dont-contact.us>
Date: Mon, 24 Nov 1997 00:23:34 -0500 (EST)

On Mon, 24 Nov 1997, Dancer wrote:
[creative snipping follows]
> Gregory Maxwell wrote:
>
> > reload_into_ims for 1.1. This is really helpful for me... Many users are
> > unwilling to take the reload button lightly.. It would be nice if this
> > feature were further enhanced so that more than X reloads by client C in Y
> > time-units will cause a real reload (for servers with broken IMS).. This
> > could be taken even further: if there are Z reloads in Y time-units, then
> > go direct (in case the hierarchy is stale).

> I didn't _quite_ follow that...

Sorry, I can be hard to follow.. :)
Squid has the ability to prevent users from forcing a reload, sending an
IMS request instead..

Unfortunately, some webservers are broken and don't reply to an IMS, so
these pages are never reloadable..

I am suggesting that if a user hits reload several times in several
minutes then Squid should fall back to the default behavior (i.e. actually
reloading the document), and if the user does this enough it should go
direct and not fetch through the hierarchy..
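The escalation I'm describing (convert reloads to IMS by default, honour them for real after X in the window, go direct after Z) could be sketched roughly like this. The thresholds, names, and window here are illustrative placeholders, not anything Squid actually implements:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds -- the X, Y, Z from above, not real Squid values.
RELOAD_THRESHOLD = 3    # X reloads in the window: do a real reload
DIRECT_THRESHOLD = 5    # Z reloads in the window: also bypass the hierarchy
WINDOW = 300            # Y time-units, in seconds

class ReloadPolicy:
    """Escalate repeated client reloads: IMS -> real reload -> go direct."""

    def __init__(self):
        # client id -> timestamps of recent reload requests
        self.history = defaultdict(deque)

    def decide(self, client, now=None):
        now = time.time() if now is None else now
        hits = self.history[client]
        hits.append(now)
        # Forget reloads that have fallen out of the time window.
        while hits and now - hits[0] > WINDOW:
            hits.popleft()
        if len(hits) >= DIRECT_THRESHOLD:
            return "reload-direct"   # reload AND skip the (possibly stale) hierarchy
        if len(hits) >= RELOAD_THRESHOLD:
            return "reload"          # the server's IMS is likely broken; really reload
        return "ims"                 # default: turn the reload into an IMS
```

So the first couple of reloads become cheap IMS checks, and only a persistent user pays for (and gets) the real thing.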

>
> > Also, options to ignore, or multiply, HTTP/1.1 cache-control variables would
> > be great.. For small in-house caches with SLOW connections this is a must
> > (I've got my home refresh rules set to keep many file types for a week
> > before IMSing; on the rare occasion we are concerned about freshness we
> > just hit reload to do an IMS).. Hard drives are cheap, T1s cost tons!
>
> YESYESYESYESYES! There's not MUCH that I'd like to completely override...but
> there are _some_ things.
>

I would suggest the ability to override OR multiply by factors..
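For the "keep many file types for a week" setup mentioned above, the existing refresh_pattern rules are the natural place for this kind of knob. The patterns and times below are just an illustration of such a home config, not a recommendation:

```
# Illustration only -- refresh_pattern takes: regex  min-minutes  percent  max-minutes
# Hold common static types for a week (10080 minutes) before IMSing:
refresh_pattern  \.gif$    10080  100%  10080
refresh_pattern  \.jpg$    10080  100%  10080
refresh_pattern  \.html$   10080  100%  10080
# Everything else: more conservative defaults
refresh_pattern  .         0      20%   4320
```

An override/multiply option would presumably hang off rules like these, scaling or replacing what the server's cache-control headers ask for.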

> > Compressed inter-cache communication. I've made a lame hack of this for my
> > home squid: I use ssh to set up a non-encrypted compressed pipe and use
> > cache_host_acl to direct .html, .txt, and .htm files through it.. This
> > speeds up page loads GREATLY! This was one of the features I was working on
> > when I was working on the cache for the Mnemonic browser. Both the
> > gzip-style and LZO-type compression methods would work well here.. (Esp now
> > with persistent connections, as the compressor will no longer get the 'hit'
> > at the start)
>
> (Dancer's jaw drops). Why the hell didn't _I_ think of doing that??
> Damnit...I've got four squids in dire need! Can I swipe a look at your setup?

Set up ssh. Then find a way to keep an ssh connection 'tacked' open.. (I
use diald to start and kill it).. Have the slogin run with -C
-L1234:cachecomputer:3128 and have the local squid use port 1234 as a parent.
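Roughly, the pieces look like this (hostname, port, and the parent line are placeholders for your own setup; check your squid.conf syntax for the exact parent directive):

```
# On the local machine: a compressed ssh tunnel to the parent cache's
# HTTP port, kept open by whatever means you like (diald in my case).
#   -C  enable compression
#   -L  forward local port 1234 to cachecomputer:3128
ssh -C -L 1234:cachecomputer:3128 cachecomputer

# Then point the local squid at the tunnel endpoint instead of the
# parent directly, so cache traffic flows through the compressed pipe.
# (parent-cache directive, localhost, port 1234)
```

Anything text-heavy (.html, .txt) compresses well; already-compressed objects gain nothing, which is why the cache_host_acl filtering by file type is worth the trouble.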

> > Improved object dumping rules: When squid runs out of disk swap space, it
> > dumps the oldest loaded objects.. So if I have an object that gets loaded,
> > gets lots of hits, and the IMS always says the object has not changed, it
> > will be flushed out before an object that gets far fewer hits and comes up
> > stale every time there is an IMS... This is not good, esp for smaller
> > caches... With the Mnemonic cache I came up with a complicated formula for
> > computing the goodness of objects.. But really, we should toss objects that
> > are the oldest and have the fewest HITS, rather than just the oldest..
>
> The 500 point purity test for cache objects. Have you ever been modified?
> Requested? Re-requested? DO you have a 'vary' header? Cache-control?

My ideas in my Mnemonic paper were MUCH more complicated than even that..
Probably too much so.. It involved computing the mean object size, the
standard deviation of object size, and magic numbers too..
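The simpler "oldest AND fewest hits" rule from above can be sketched as a toy score (this is not the Mnemonic formula, and the weighting is an arbitrary assumption for illustration):

```python
import time

def eviction_score(last_hit_time, hit_count, now=None):
    """Lower score = evicted first.

    Combines recency with popularity instead of pure age, so a
    frequently hit old object outlives a rarely hit one.
    """
    now = time.time() if now is None else now
    age = now - last_hit_time
    # +1 terms avoid division by zero for brand-new or never-hit objects.
    return (hit_count + 1) / (age + 1.0)

# Candidate objects: (name, last_hit_time, hit_count)
objects = [
    ("popular-old",     900, 50),  # old but heavily hit -- should survive
    ("fresh-unpopular", 990,  1),  # recent, barely hit
    ("old-unpopular",   100,  1),  # old AND barely hit -- first to go
]

# Pick the victim: lowest score wins eviction.
victim = min(objects, key=lambda o: eviction_score(o[1], o[2], now=1000))
```

Plain oldest-first would evict `popular-old` and `old-unpopular` with equal enthusiasm; the combined score protects the object that keeps earning hits.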
Received on Sun Nov 23 1997 - 21:29:48 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:37:42 MST