Re: Caching search engine results

From: Duane Wessels <wessels>
Date: Wed, 14 Aug 96 13:24:06 -0700

andrew@vancouver-webpages.com writes:

>I just got a copy of Squid 1.0.5 and was experimenting with
>my cache test page at http://vancouver-webpages.com/cache-test.html
>
>I found that the default config of Squid won't cache URLs with "cgi-bin"
>in them. I have another directory "cgi-pub" which I use (it isn't in /robots.txt),
>but I was still having a problem caching GETs with an argument.
>
>Apache 1.1.1 will cache these documents, provided that they don't have an
>illegal Expires header, and do have a legal Last-Modified header.
>
>It seemed to me that one ought to be able to cache the output of a search
>engine - the engine can be written to generate appropriate headers.
>I have set http://vancouver-webpages.com/cgi-bin/searchBC2?pizza to
>expire in one day - if someone makes the identical request during that time,
>I figure they ought to be able to get a cached copy.
>
>(also copied to /cgi-pub/searchBC2)
>
>Is Squid explicitly not caching requests with an argument? I couldn't
>quickly see it in the code.
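
A minimal sketch of what "generate appropriate headers" can look like,
assuming a Python CGI script (the real searchBC2 code is not shown in
this thread; the names below are illustrative only):

    #!/usr/bin/env python3
    # Hypothetical search CGI that emits cache-friendly headers.
    import os
    import time
    from email.utils import formatdate

    def run_search(query):
        # Stand-in for the real search; returns an HTML page.
        return "<html><body>Results for %s</body></html>" % query

    query = os.environ.get("QUERY_STRING", "")
    now = time.time()

    # Last-Modified: when this result was generated (or when the index
    # was last rebuilt).  Expires: one day later, matching the intent
    # described above.  Both use the RFC 1123 date format proxies expect.
    print("Content-Type: text/html")
    print("Last-Modified: " + formatdate(now, usegmt=True))
    print("Expires: " + formatdate(now + 86400, usegmt=True))
    print()
    print(run_search(query))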

The default config file tells Squid not to cache any URL containing
"cgi-bin" or "?". Part of the reasoning is that cgi-bin scripts are
dynamic and their output should not be cached. Also, we don't really
expect many hits on cgi-bin URLs, so why bother caching them (assuming
disk space is a valuable commodity).
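
For reference, the relevant lines in the shipped squid.conf look
something like this (assuming the 1.0.x cache_stoplist syntax; verify
against your own copy of the default config):

    # Words which, if found anywhere in a URL, cause the reply
    # to not be cached.
    cache_stoplist cgi-bin ?

Removing "?" from that list would let Squid consider query URLs for
caching, subject to the usual Expires/Last-Modified rules.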

Duane W.
Received on Wed Aug 14 1996 - 13:24:07 MDT
