Re: [squid-users] New squid machines.

From: Robin Stevens <robin.stevens@dont-contact.us>
Date: Wed, 10 Oct 2001 15:07:32 +0100

On Wed, Oct 10, 2001 at 10:16:43AM +0100, Palmer J.D.F. wrote:
> I want to cluster/load balance 2 maybe 3 possibly 4 machines for better
> peak performance and better resilience over our existing systems. I am
> open to suggestions though.

> What spec should I be looking at, and what is more important?
> how much disk space
> how much RAM
> RAID?
> single or dual processor
  
RAM is related to disk space - essentially allow around 10MB of RAM per GB
of cache disk, plus overheads. Generally there's little point in going for
RAID; instead go for one partition per cache disk. A second CPU is
unlikely to make a significant difference, although Joe Cooper recently
mentioned an SMP machine running two separate instances of squid.
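
To illustrate the one-partition-per-disk approach in squid.conf (a sketch
only - mount points and sizes here are invented, not a real config):

    # one cache_dir per physical disk; aufs needs the async-io build
    cache_dir aufs /cache1 15000 16 256
    cache_dir aufs /cache2 15000 16 256
    # ...and so on for each remaining disk
    # keep cache_mem modest - the big memory cost is the in-core
    # index, at roughly 10MB of RAM per GB of cache_dir
    cache_mem 64 MB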

We're in the process of changing from Sun (Ultra 10, 440MHz, 768MB, 6x9GB
cache disk, Solaris 8) to Intel hardware. The two currently in service are
Dell Poweredge 2500 servers, single 1000MHz CPU, 1.5GB RAM, 8x18GB 10krpm
disk - you should be able to get three for around your given budget.

Software is a Red Hat 7.1-based install with a 2.4.7 kernel, 7 reiserfs cache
disks, Squid 2.4STABLE2, async-io. The two machines in service are now
routinely handling 150-200 requests per second - in tests last week I
managed to push the load a bit higher, but things were looking distinctly
uncomfortable by 250 req/sec. As a comparison, our Solaris boxes struggled
above 60-70 req/sec.

The Dells have been handling all our traffic of late: now that term has
started again, that's 15 million requests and nearly 100GB a day. I'm now
planning to
ditch the old Solaris servers (we can find plenty of uses for them
elsewhere) and replace them with a couple more Dell machines to ensure that
we've got a reasonable amount of spare capacity. I might be tempted to put
more RAM on them this time, given that the price difference between 6x256MB
and 4x512MB modules has fallen dramatically since the first two were
purchased.
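
The back-of-the-envelope sums, using the 10MB-per-GB rule above, bear this
out:

    7 cache disks x 18GB      = ~126GB of cache space
    ~126GB x 10MB RAM per GB  = ~1.26GB just for Squid's in-core index

so 1.5GB doesn't leave much headroom once cache_mem and the OS are taken
into account.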

One major gripe has been Dell's customer care - the servers were shipped
incorrectly and this took a *lot* of hassle to sort out. I'd like to think
this was very unusual.

> What methods of load balancing/clustering are available? How reliable are
> they?
  
We're using an Alteon AceDirector 3, which works very well, but these are
not at all cheap. You might be better going for something like Linux
Virtual Server, as used by the JANET cache clusters, if budget is an issue.
If you're using cache.pac files, there are tricks you can do with these as a
very cheap form of load-balancing - see Duane Wessels' O'Reilly book.
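
For instance, something along these lines in the .pac file splits browsers
across two caches by hashing the hostname (proxy names invented for the
example); listing both caches in each return gives the browser a fallback
if one dies:

    function FindProxyForURL(url, host) {
        // crude hash of the hostname, so requests for a given site
        // always go to the same cache
        var h = 0;
        for (var i = 0; i < host.length; i++)
            h = h + host.charCodeAt(i);
        if (h % 2 == 0)
            return "PROXY cache1.example.ac.uk:3128; " +
                   "PROXY cache2.example.ac.uk:3128";
        return "PROXY cache2.example.ac.uk:3128; " +
               "PROXY cache1.example.ac.uk:3128";
    }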

> We currently run Squid 2.2Stable5 should I be looking into upgrading this?
> What kind of overhead does authentication through Squid place on the Squid
> servers?
  
I've no experience of this, as we're using interception proxying (the
AceDirector does allow us to exclude a handful of sites which for various
reasons can't go through the proxy).

> If there was a slow machine in the cluster would it have an effect on the
> other servers?

It can make things more awkward, depending on how the load-balancing is set
up. We have our Sun machines configured as "overflow" servers when the
Dells hit predefined load limits, but they've not actually been required
except during Code Red/Nimda storms! The AceDirector does have the option
of server weights, but not if one wants to perform hashing by destination
IP in order to ensure all traffic to the same site keeps going through the
same server.

-- 
--------------- Robin Stevens  <robin.stevens@oucs.ox.ac.uk> -----------------
Oxford University Computing Services ----------- Web: http://www.cynic.org.uk/
------- (+44)(0)1865: 273212 (work) 273275 (fax)  Mobile: 07776 235326 -------