Re: Squid regex comparison speed

From: Chris Tilbury <cudch@dont-contact.us>
Date: Wed, 17 Jun 1998 09:07:51 +0100

On Wed, Jun 17, 1998 at 10:06:35AM +1000, Andrew Smith wrote:

> We also have an interest in this issue, since many subscription
> information services overseas still insist on IP based access control.
> This means that we really need our local squid caches to go _direct_ for
> about 200 site regexes, rather than going via their parents. Indirect
> information I obtained indicated that this would render our heavily-hit
> caches unusable.
> I have not done nearly enough research on this, but since the discussion
> thread was active, I wondered if anyone else has tried this.

We have the same situation over here, although we're not at 200 sites yet
(I'm still waiting for our library to send me the complete list of URLs so
I can prime our cache with it).

After giving up on using URL regexps (the problem is that many of the
journals seem to be spread across several different hosts and domains), I'm
now experimenting with using the information the netdb gives me to work out
an appropriate CIDR block to use in a "dst" acl.
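
Roughly, that looks like the lines below -- the acl name and the network are
only illustrative; you'd substitute whatever blocks the netdb measurements
point you at:

  # one dst network can cover a whole publisher's address block, instead
  # of a long list of URL patterns checked against every request
  acl journalnets dst 192.0.2.0/255.255.255.0
  always_direct allow journalnets

The match itself is just an address/netmask comparison, which is where the
saving over a few hundred regexps comes from (though, as I understand it, a
dst acl does need the destination IP, so it can force a DNS lookup that a
url_regex check wouldn't).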

This seems to work quite well -- in fact, I have noticed that certain
services we access from different companies all fall within the same block
(mainly ones whose servers appear to have been placed within JANET
specifically to handle UK HE traffic).

It won't catch them all, of course, and it is quite a "coarse" mechanism,
but it seems easier and more effective than using URL regexps.

Cheers,

Chris

-- 
Chris Tilbury, UNIX Systems Administrator, IT Services, University of Warwick
EMAIL: cudch+s@csv.warwick.ac.uk PHONE: +44 1203 523365(V)/+44 1203 523267(F)
                            URL: http://www.warwick.ac.uk/staff/Chris.Tilbury
Received on Wed Jun 17 1998 - 01:09:07 MDT
