Efficient way of enumerating squid contents?

From: Chris Wedgwood <chris@dont-contact.us>
Date: Fri, 9 Oct 1998 17:37:11 +1300

As best I can see, there is no easy way to determine which objects
match a particular pattern without walking the list of all squid
objects?

I ask this because I'd like to be able to purge all URLs matching
some sort of regex, something like:

   client -m PURGEREGEX http://some.site/ or
   client -m PURGEREGEX .*lewinsky.*

but the only way to do this is to slowly walk all objects within the
squid, purging them one at a time, something that would take a _very_
long time on most sizeable caches.

The other thing I'd like to be able to do is case-fold the URL
before hashing it, perhaps driven by regex rules. Something like:

   case_fold_url -i "\.asp$"
   case_fold_header -i "Server: Microsoft"

This one is a little tricky though... we have to take the requested
URL, try it case-folded first, then fall back to the original
spelling if that misses. No big deal, just a little more expensive...
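
To make the lookup order concrete, a minimal sketch - the
case_fold_* directives above are my own invention, and "store" here
is just a stand-in dict, not Squid's actual store interface:

   import re

   # Hypothetical rule list mirroring the case_fold_url example;
   # re.IGNORECASE plays the role of the -i flag.
   CASE_FOLD_URL_RULES = [re.compile(r"\.asp$", re.IGNORECASE)]

   def should_fold(url):
       return any(r.search(url) for r in CASE_FOLD_URL_RULES)

   def lookup(store, url):
       # Try the case-folded key first; fall back to the original
       # spelling so objects cached before the rule existed still
       # hit.
       if should_fold(url):
           obj = store.get(url.lower())
           if obj is not None:
               return obj
       return store.get(url)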

I can't see anywhere this would screw anything legitimate up,
assuming the case_fold_* directives are used properly - can anyone
else?

-cw