Caching search engine results

From: Andrew Daviel <andrew@dont-contact.us>
Date: Fri, 9 Aug 1996 13:40:30 -0700 (PDT)

I just got a copy of Squid 1.0.5 and was experimenting with
my cache test page at http://vancouver-webpages.com/cache-test.html

I found that the default config of Squid won't cache URLs containing "cgi-bin".
I have another directory, "cgi-pub", which I use instead (it isn't in /robots.txt),
but I was still having problems caching GETs with an argument.
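(If that behaviour comes from the shipped config rather than the code, my guess
is that it's a stoplist entry along the lines below - this is only my guess at
the relevant squid.conf lines, not a quote from the 1.0.5 distribution:

    # Guessed squid.conf entries: any URL containing one of these
    # substrings is treated as uncacheable by default.
    cache_stoplist cgi-bin ?
)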

Apache 1.1.1 will cache these documents, provided that they don't have an
illegal Expires header, and do have a legal Last-Modified header.
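For example, a response carrying headers like the following (the dates are just
illustrative) passes both of those tests:

    Last-Modified: Fri, 09 Aug 1996 20:40:30 GMT
    Expires: Sat, 10 Aug 1996 20:40:30 GMT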

It seemed to me that one ought to be able to cache the output of a search
engine - the engine can be written to generate appropriate headers.
I have set http://vancouver-webpages.com/cgi-bin/searchBC2?pizza to
expire in one day - if someone makes the identical request during that time,
I figure they ought to be able to get a cached copy.

(also copied to /cgi-pub/searchBC2)
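As a sketch of what I mean (illustrative only, not the actual searchBC2 code;
do_search below is a made-up placeholder), a CGI script can emit a Last-Modified
header for "now" and an Expires header one day ahead:

    #!/usr/bin/env python3
    # Illustrative CGI sketch: emit headers that make a search result
    # cacheable for one day. do_search() is a placeholder, not real code.
    import os
    import sys
    import time
    from email.utils import formatdate

    def do_search(query):
        # stand-in for the real search; returns an HTML results page
        return "<html><body><h1>Results for %s</h1></body></html>" % query

    query = os.environ.get("QUERY_STRING", "")
    now = time.time()
    body = do_search(query)

    # Last-Modified: a valid HTTP-date (here, the time of generation).
    # Expires: 24 hours later, so an identical request within a day
    # can legitimately be answered from a cache.
    sys.stdout.write("Content-Type: text/html\r\n")
    sys.stdout.write("Last-Modified: %s\r\n" % formatdate(now, usegmt=True))
    sys.stdout.write("Expires: %s\r\n" % formatdate(now + 86400, usegmt=True))
    sys.stdout.write("Content-Length: %d\r\n" % len(body))
    sys.stdout.write("\r\n")
    sys.stdout.write(body)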

Is Squid explicitly not caching requests with an argument? I couldn't
quickly see it in the code.

Andrew Daviel

andrew@vancouver-webpages.com
http://vancouver-webpages.com : home of searchBC