Re: [squid-users] Scalability in serving large amount of concurrent requests

From: Denys Fedoryschenko <nuclearcat_at_nuclearcat.com>
Date: Sat, 2 May 2009 16:42:34 +0300

I have Squid serving:

Proxy-Karam-Main ~ # netstat -s|grep -i estab
    41583 connections established

client_http.requests = 1364.257457/sec
client_http.kbytes_out = 16067.017328/sec
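
These are the cache manager's 5-minute averages. If you want to check the same
thing on your own box, something along these lines should work (assuming the
default proxy port and an unrestricted cachemgr):

    squidclient -p 3128 mgr:5min | grep client_http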

At peak time it reaches 60-70K connections, and sometimes even 208 Mbps. But it is
serving real customers, not running as an accelerator, and many tune-ups have been
done.
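
I won't list every change here, but if you want to push similar connection counts,
the OS-level side of the tuning is typically along these lines (Linux, the values
are only illustrative):

    # allow enough file descriptors for tens of thousands of sockets
    ulimit -n 65536
    sysctl -w fs.file-max=200000
    # widen the ephemeral port range and deepen the SYN/accept backlogs
    sysctl -w net.ipv4.ip_local_port_range="10240 65535"
    sysctl -w net.ipv4.tcp_max_syn_backlog=8192
    sysctl -w net.core.somaxconn=8192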

For offloading I recommend trying nginx. It works well for static content and even
works very well as a FastCGI frontend.
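
A minimal sketch of what I mean (the host name, docroot and FastCGI backend
address are just placeholders):

    server {
        listen 80;
        server_name static.example.com;    # placeholder host
        root /var/www/static;              # placeholder docroot

        # serve static objects straight from disk, let clients cache them
        location / {
            expires 30d;
        }

        # hand dynamic requests to a FastCGI backend
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;   # placeholder backend address
        }
    }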

On Saturday 02 May 2009 12:39:04 Roy M. wrote:
> Hey,
>
> On Sat, May 2, 2009 at 4:24 PM, Jeff Pang <pangj_at_arcor.de> wrote:
> > We use Squid as a reverse proxy for the popular webmail service here, serving
> > static resources like images/CSS/JS etc. In total there are 24 Squid boxes,
> > each with more than 20,000 concurrent connections. For small static objects,
> > Squid has much higher performance than Apache.
> >
> > But as I once mentioned in a message on the list, Squid can't push high
> > traffic through. I have never seen a Squid box reach a traffic flow of
> > 200 Mbit/s, while in some cases lighttpd (epoll + multiple processes) can
> > push much higher traffic than Squid.
>
> So did you try Apache/Lighty + mod_proxy?
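
For the reverse-proxy setup Jeff describes above, the Squid side is basically
accelerator mode; a minimal sketch (the site name and origin IP are placeholders):

    # accept client traffic in accelerator (reverse proxy) mode
    http_port 80 accel defaultsite=www.example.com
    # fetch misses from the real web server
    cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
    # only serve our own site
    acl our_site dstdomain www.example.com
    http_access allow our_site
    cache_peer_access origin allow our_site
    http_access deny all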
Received on Sat May 02 2009 - 13:43:51 MDT
