Re: Can I do this with squid?

From: Stephane Bortzmeyer <bortzmeyer@dont-contact.us>
Date: Tue, 27 Aug 96 09:39:57 +0200

On Monday 26 August 96, at 17:08, the keyboard of
richard@hekkihek.hacom.nl (Richard Huveneers) wrote:

> >Rather than add this to squid, I think it would be better to have another
> >program that takes a list of URLs and fetches them, through squid, during
...
> Such a util already exists. It's called 'geturl'. Please note that there are

There are many such programs. I use "prefetch", based on "url_get".
prefetch takes an HTML file as an argument and retrieves every URL in it
with url_get (which has proxy support). It would be trivial to save each
retrieved HTML file and run prefetch on it in turn, thus providing depth
support.
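
For illustration, here is a minimal sketch of that idea in Python. It is
not the actual prefetch/url_get code; the proxy address, the link
extraction, and the depth handling are all my own assumptions:

    # Sketch of prefetch-style cache warming through a proxy.
    # Not the real prefetch script; proxy address and depth logic assumed.
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    PROXY = {"http": "http://localhost:3128"}   # assumed Squid address

    class LinkParser(HTMLParser):
        """Collect the href of every <a> tag in a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def fetch(url, opener):
        # A plain GET through the proxy is enough to warm the cache.
        with opener.open(url) as resp:
            return resp.read().decode("latin-1", "replace")

    def prefetch(start_url, depth=1):
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler(PROXY))
        seen, frontier = set(), [start_url]
        for _ in range(depth):
            next_frontier = []
            for url in frontier:
                if url in seen:
                    continue
                seen.add(url)
                try:
                    page = fetch(url, opener)
                except OSError:
                    continue        # skip unreachable URLs
                parser = LinkParser()
                parser.feed(page)
                # Saving the page and re-running on its links is
                # exactly the "depth support" described above.
                next_frontier += [urljoin(url, l) for l in parser.links]
            frontier = next_frontier

    prefetch("http://www.example.org/", depth=2)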

Another option is the Harvest gatherer. While its main purpose is to
gather files for later indexing, it can be used as an efficient
prefetcher, with proxy and depth support (and many tunable options for
this use).

You can also use the latest (still beta) version of echoping, which
includes HTTP support (optionally through a proxy).
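
For what it is worth, a hedged usage sketch (the host names and proxy
port are made up, and the proxy form of the request is my assumption;
check the echoping documentation for the exact syntax):

    # Fetch a page directly:
    echoping -h /index.html www.example.org
    # Fetch through a proxy by aiming at the proxy port with a full URL:
    echoping -h http://www.example.org/index.html proxy.example.org:3128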

References:

1) prefetch <URL:http://cache.cnrs.fr/prefetch.txt>
url_get was on <URL:http://uts.cc.utexas.edu/~zippy/url_get.html> but
seems to have moved. You can ask me for a copy if you wish (with a patch
to support cache_object:// URLs).

2) Harvest <URL:http://harvest.cs.colorado.edu/>

3) echoping <URL:ftp://ftp.pasteur.fr/pub/Network/echoping> (version 2.0,
with HTTP support, is still beta; request it by mail)

4) webcopy and others can be found through the WWW FAQ
<URL:http://www.boutell.com/faq/>