Re: [squid-users] What is the max number of Squirm redirect_children?

From: Leonardo <leonardodiserpierodavinci_at_gmail.com>
Date: Mon, 21 Nov 2011 16:44:04 +0100

Thanks Amos.

> As for the title question: You are the only one who knows that. It depends
> entirely on how much RAM your system has and how much is being used (by
> everything running). The number which can run on your system alongside Squid
> and the OS and everything else without causing the system to swap.

The server Squid runs on has about 3 GB of RAM and a 3 GHz processor.
I'm testing it right now with no network connection (I can't do live
tests for the moment). Spawning 80 instances of Squirm makes the
machine crawl for a few minutes, but eventually everything becomes
responsive again, and vmstat reports no page-ins/page-outs.

>> Squid Cache (Version 3.1.7): Terminated abnormally.
>
> Please try a more recent 3.1 release. We have done a lot towards small
> efficiencies this year.

Unfortunately I can't upgrade right now, but I hope to be able to do so soon.

> I'd also look at what Squirm is doing and try to reduce a few things ...
>  * the number of helper lookups. With url_rewrite_access directive ACLs
>  * the work Squid does handling responses. By sending empty response back
> for "no-change", and using 3xx redirect responses instead of re-write
> responses.
>
> You may also be able to remove some uses of Squirm entirely by using
> deny_info redirection.

I use Squirm solely to force SafeSearch on Google, via these regex patterns:
regexi ^(http://www\.google\..*/search\?.*) \1&safe=active
regexi ^(http://www\.google\..*/images\?.*) \1&safe=active
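Come to think of it, your url_rewrite_access suggestion should cut the helper load a lot here, since only Google requests can ever match. Something like the following fragment is what I have in mind (untested, and the ACL name is my own invention); it should make Squid skip the helper round-trip for all non-Google traffic:

```
# Only consult the rewriter helpers for Google hosts; every other
# request bypasses the helper entirely.
acl google_sites dstdomain .google.com
url_rewrite_access allow google_sites
url_rewrite_access deny all
```

I realize ".google.com" doesn't cover the other Google TLDs my regexes match, so I'd have to list those domains too.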

Hmmm... now I am wondering whether I could achieve the same effect
with a small script of my own, called via redirect_program...
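Something along these lines, perhaps. This is only a sketch of the idea (written in Python for my own convenience; the same logic would work in Perl), assuming the usual redirector protocol of one URL-first request line on stdin and one reply line on stdout:

```python
import re
import sys

# Same rewrite as my two Squirm rules: append &safe=active to
# Google /search and /images query URLs (case-insensitive match).
PATTERNS = [
    re.compile(r'^(http://www\.google\..*/search\?.*)', re.I),
    re.compile(r'^(http://www\.google\..*/images\?.*)', re.I),
]

def rewrite(url):
    """Return the rewritten URL, or the URL unchanged if nothing matches."""
    for pat in PATTERNS:
        m = pat.match(url)
        if m:
            return m.group(1) + '&safe=active'
    return url

if __name__ == '__main__':
    # Helper loop: Squid sends one request per line (URL first, then
    # client info); we answer with one line and must not buffer it.
    for line in sys.stdin:
        fields = line.split()
        url = fields[0] if fields else ''
        sys.stdout.write(rewrite(url) + '\n')
        sys.stdout.flush()
```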

Thanks for your time, and best regards,

L.
Received on Mon Nov 21 2011 - 15:44:11 MST