Re: [squid-users] Ignoring query string from url

From: Henrik Nordstrom <henrik_at_henriknordstrom.net>
Date: Mon, 27 Oct 2008 11:48:47 +0100

On Mon, 2008-10-27 at 14:57 +0530, Nitesh Naik wrote:

> Is there any sample code available for url rewriter helper which will
> process requests in parallel?

It doesn't need to process them in parallel unless you really need to
scale the rewrites across multiple CPUs, or have threads making
callouts to other servers. A single thread processing one request at a
time is sufficient for your problem. The difference in your case is in
how Squid uses the helper.
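
On the Squid side that boils down to a few squid.conf lines, roughly
like this (a minimal sketch, assuming a Squid version with
url_rewrite_concurrency support; the helper path is made up and the
numbers are only examples to tune for your load):

# hypothetical path to the rewriter script below
url_rewrite_program /usr/local/squid/bin/rewrite.pl
# one helper process is enough when Squid multiplexes requests onto it
url_rewrite_children 1
# number of requests Squid may keep outstanding on that one helper;
# with concurrency enabled Squid prefixes each request with an ID that
# the helper must echo back in its reply
url_rewrite_concurrency 100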

Example script removing query strings from any file ending in .ext:

#!/usr/bin/perl -an
# -n loops over each request line from Squid; -a splits it into @F.
BEGIN { $| = 1 }        # unbuffered output, so each reply reaches Squid at once
$id = $F[0];            # concurrency channel ID prepended by Squid
$url = $F[1];           # the requested URL
if ($url =~ m#\.ext\?#) {
        $url =~ s/\?.*//;       # strip the query string
        print "$id $url\n";     # reply with the rewritten URL
        next;
}
print "$id\n";          # ID alone = leave the URL unchanged
next;
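
To see the protocol this speaks, you can feed the script a couple of
request lines by hand (saving it as rewrite.pl; the URLs and trailing
request fields here are only illustrative):

$ printf '0 http://example.com/file.ext?a=1 192.0.2.1/- - GET\n1 http://example.com/index.html 192.0.2.1/- - GET\n' | ./rewrite.pl
0 http://example.com/file.ext
1

The first reply returns a rewritten URL on channel 0; the bare "1"
tells Squid to leave the second URL untouched.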

Or if you want to keep it real simple:

#!/usr/bin/perl -p
# -p echoes every input line back after applying this substitution
s%\.ext\?.*%.ext%;

but it doesn't illustrate the principle as well, and it causes a bit
more work for Squid (though not much), since every line is echoed back
even when nothing was rewritten.

> I am still not clear on how to write a
> helper program which will process requests in parallel using perl. Do
> you think squirm with 1500 child processes works differently
> compared to the solution you are talking about?

Yes. Squirm handles one request at a time per child process, which is
why it needs so many children; with the concurrent helper protocol
Squid keeps many requests outstanding on a single helper process
instead.

Regards
Henrik