RE: [squid-users] Performance problems - need some advice

From: Gregori Parker <gregori@dont-contact.us>
Date: Tue, 7 Feb 2006 15:36:29 -0800

Yes, please keep it on the squid-users list... I, for one, am interested in this thread.

I just deployed 3 squid servers in a similar configuration (reverse proxy serving large media files), except each of ours is a dual 3GHz Xeon, 64-bit throughout, with 4GB RAM and around a TB of dedicated cache space each (aufs on ext2, mounted with the noatime option). They run Squid 2.5.STABLE12 on Fedora Core 4 x86_64. Disk performance looks fine to me, but I'm concerned because top reports squid averaging 70% CPU usage most of the time.

Can anyone recommend techniques for assessing squid performance? I have no good way of benchmarking our clusters since SNMP isn't ready quite yet. Please don't mention cachemgr, thanks :)
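Until SNMP is up, one low-tech option is to mine access.log for throughput and hit-rate numbers. A rough sketch, assuming Squid's native log format (field 4 is the result code, field 5 the byte count); the sample lines below are synthetic, not from any real cache:

```shell
#!/bin/sh
# Summarize a slice of Squid's native access.log:
# request count, hit count, and bytes served.
report() {
  awk '{ n++; bytes += $5; if ($4 ~ /HIT/) hits++ }
       END { printf "%d reqs, %d hits, %.1f KB\n", n, hits, bytes/1024 }' "$1"
}

# Synthetic two-line sample standing in for your real access.log:
tmp=$(mktemp)
printf '%s\n' \
  '1139349389.103 12 10.0.0.1 TCP_HIT/200 2048 GET http://example.com/a - NONE/- -' \
  '1139349390.211 95 10.0.0.2 TCP_MISS/200 4096 GET http://example.com/b - DIRECT/10.0.0.9 -' > "$tmp"
report "$tmp"     # -> 2 reqs, 1 hits, 6.0 KB
rm -f "$tmp"
```

Run it over, say, the last five minutes of log and divide by the interval to get requests/sec and Mb/s.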
 

-----Original Message-----
From: Kinkie [mailto:kinkie-squid@kinkie.it]
Sent: Tuesday, February 07, 2006 3:23 PM
To: Jeremy Utley
Cc: Squid ML
Subject: Re: [squid-users] Performance problems - need some advice

On Tue, 2006-02-07 at 12:49 -0800, Jeremy Utley wrote:
> On 2/7/06, Kinkie <kinkie-squid@kinkie.it> wrote:
>
> > Profiling your server would be the first step.
> > How does it spend its CPU time? Within the kernel? Within the squid
> > process? In iowait? What's the number of open filedescriptors in Squid
> > (you can gather that from the cachemgr)? And what about disk load? How
> > much RAM does the server have, how much of it is used by squid?
>
> I was monitoring the servers as we brought them online last night in
> most respects - I wasn't monitoring file descriptor usage, but I do
> have squid patched to support more than the standard number of file
> descriptors, and am using the ulimit command according to the FAQ.

That can be a bottleneck if you're building up a SYN backlog. Possible
but relatively unlikely.
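Both things are easy to spot-check from a shell. A minimal sketch, Linux-specific (assumes /proc and net-tools; the counter wording varies by kernel, so grep loosely):

```shell
#!/bin/sh
# Count open file descriptors for a pid via /proc (Linux).
fd_count() {
  ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Demo on the current shell; for squid, substitute: fd_count "$(pidof squid)"
echo "this shell has $(fd_count $$) fds open"

# Listen-queue (SYN backlog) overflows; fall back gracefully
# if netstat isn't installed.
netstat -s 2>/dev/null | grep -i 'listen' || echo "(no listen-overflow counters found)"
```

If the "times the listen queue of a socket overflowed" counter is climbing, the backlog theory becomes a lot more likely.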

> When I was monitoring, squid was still building its cache, and squid
> was using most of the system memory at that time. It seems our major
> bottleneck is in Disk I/O - if squid can fulfill a request out of
> memory, everything is fine, but if it has to go to the disk cache,
> performance suffers.

That can be expected to a degree. So are you seeing lots of IOWait in
the system stats?
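If vmstat or top aren't handy, the same "wa" number can be derived straight from /proc/stat. A sketch (Linux-only; field layout per proc(5)):

```shell
#!/bin/sh
# Compute the iowait share from two /proc/stat samples --
# the same figure top and vmstat show in their "wa" column.
cpu_sample() {
  # fields after "cpu": user nice system idle iowait irq softirq ...
  set -- $(grep '^cpu ' /proc/stat)
  echo "$6 $(( $2 + $3 + $4 + $5 + $6 + $7 + $8 ))"
}

s=$(cpu_sample); io1=${s%% *}; t1=${s##* }
sleep 1
s=$(cpu_sample); io2=${s%% *}; t2=${s##* }
echo "iowait over 1s: $(( 100 * (io2 - io1) / (t2 - t1 + 1) ))%"
```

Anything consistently in the double digits during peak traffic points at the disks.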

> Right now, we have five 18GB SCSI disks holding our
> cache: 2 of those are on the primary SCSI controller with the OS disk,
> the other 3 on the secondary.

How are the cache disks arranged? RAID? No RAID (aka JBOD)?

> Could there perhaps be better
> performance with one larger disk on one controller with the OS disk,
> and another larger disk on the secondary controller?

No, in general more spindles are good because they can perform in
parallel. What kind of cache_dir system are you using? aufs? diskd?
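For the record, the one-cache_dir-per-spindle layout I mean looks roughly like this in squid.conf; the paths and sizes below are illustrative for 18GB disks (leave some headroom per disk), not taken from this thread:

```
# squid.conf -- one aufs cache_dir per physical disk:
# cache_dir aufs <directory> <MBytes> <L1 dirs> <L2 dirs>
cache_dir aufs /cache1 14000 16 256
cache_dir aufs /cache2 14000 16 256
cache_dir aufs /cache3 14000 16 256
cache_dir aufs /cache4 14000 16 256
cache_dir aufs /cache5 14000 16 256
```

With separate cache_dir entries Squid load-balances objects across the disks itself, which usually beats RAID for a cache.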

> We're also
> probably a little low on RAM in the machines - each of the 2 current
> squid servers have 2GB of ram installed.

I assume that you're serving much more content than that, right?
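As a rough sizing aid (per the Squid FAQ's rule of thumb, not numbers from this thread): the in-memory index costs on the order of 10MB of RAM per GB of cache_dir, on top of cache_mem itself, so ~90GB of cache already eats close to 1GB of a 2GB box before any hot objects are cached in memory. Illustrative settings:

```
# squid.conf -- illustrative memory settings for a 2GB box
# fronting a large disk cache.
cache_mem 256 MB
maximum_object_size_in_memory 512 KB
```

Keeping cache_mem modest leaves RAM for the index and the OS page cache.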

> Right now, we have 4 Apache servers in a cluster, and these machines
> currently max out at about 300Mb/s. Our hope is to utilize squid to
> push this up to about 500Mb/s, if possible. Has anyone out there ever
> gotten a squid server to push that kind of traffic? Again, the files
> served from these servers range from a few hundred KB to around 4MB in
> size.

In raw terms, Apache should outperform Squid, thanks to more specific OS
support. Squid outperforms Apache in flexibility and manageability, and
by offering finer control over the server and over what clients can and
cannot do.

Please keep the discussion on the mailing-list. It helps get more ideas
and also it can provide valuable feedback for others who might be
interested in the same topics.

-- 
Kinkie <kinkie-squid@kinkie.it>
Received on Tue Feb 07 2006 - 16:36:34 MST

This archive was generated by hypermail pre-2.1.9 : Wed Mar 01 2006 - 12:00:03 MST