Re: [squid-users] better performance using multiple http_port

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Tue, 23 Feb 2010 13:51:03 +1300

On Mon, 22 Feb 2010 19:28:39 -0300, Felipe W Damasio <felipewd_at_gmail.com>
wrote:
> Hi All,
>
> I'm using squid at an ISP (300Mbps).
>
> We tried both squid-2.7stable7 and squid-3.1.0.16 and had the same
> results.
>
> The time to do a "/usr/bin/time squidclient
> http://www.terra.com.br/portal" goes down almost immediately after
> starting squid.
>

Please define 'down'. Error pages being returned? TCP connections closing
unexpectedly? TCP connections hanging? Packets never arriving at Squid or
the web server?

Is it only for a short period immediately after starting Squid? (i.e. in
the lag time between startup and being ready for service?)

> We tried turning off the cache so we can't have I/O-related
> slowdowns and had the same results. Neither CPU nor memory seem to be
> the problem.

How did you 'turn off the cache'? By adding "cache deny all", removing the
"cache_dir" entries, or removing "cache_mem"?

>
> The only solution we could find that helped (didn't solve the
> problem, though) is using multiple http_port, one for each user
> network. (our ISP has 30 different user networks).
>
> We then used iptables to direct each network to a different
> http_port (all with tproxy), and the time improved on the http_ports
> that have fewer users...
>
> But since squid doesn't use multiple cores, and our CPU usage didn't go
> up or down (same with memory), why does this help?
>
> Is this the correct behavior?
>
> We don't really get it because it's still a single squid
> process... could this be related to /proc configuration or
> network configuration?
>
> Thanks in advance.
>
> Felipe Damasio

If you are using TPROXY it could be conntrack limits in the network stack,
since TPROXY requires socket-level connection tracking and the default
conntrack limits are a bit low for networks above roughly 100Mbps.
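If conntrack is the limit you would normally see the table sitting at or
near its maximum. Something along these lines will show and raise it (the
sysctl key names differ slightly between kernel versions, so treat the
names and the value as an example only):

  # current usage vs. configured maximum
  cat /proc/sys/net/netfilter/nf_conntrack_count
  cat /proc/sys/net/netfilter/nf_conntrack_max

  # raise the maximum for the running kernel (example value only)
  sysctl -w net.netfilter.nf_conntrack_max=524288

  # older kernels use net.ipv4.netfilter.ip_conntrack_max instead

Also check dmesg for "nf_conntrack: table full, dropping packet" messages,
which are the usual symptom.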

It could simply be overload of the single Squid process, which is only
really confirmed to handle around 5,000 requests per second. If your
300Mbps of traffic involves more requests per second than that, you may
need another Squid instance.
These configurations are useful for high throughput with Squid:
 http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
 http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend
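The second page boils down to one lightweight frontend Squid balancing
requests across several caching backends with CARP. Very roughly, and only
as an illustration (addresses, ports and the number of peers are
placeholders; the TPROXY details are covered on the wiki page itself):

  # frontend squid.conf: no caching, just hashing requests to backends
  http_port 3128
  cache deny all
  cache_peer 127.0.0.1 parent 4001 0 carp no-query
  cache_peer 127.0.0.1 parent 4002 0 carp no-query
  cache_peer 127.0.0.1 parent 4003 0 carp no-query

  # each backend squid.conf: its own port and its own cache_dir
  http_port 4001
  cache_dir aufs /cache1 50000 16 256

CARP hashes on the URL, so each backend caches a distinct subset of the
content instead of duplicating it.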

Amos
