RE: [squid-users] New squid machines.

From: Chemolli Francesco (USI) <ChemolliF@dont-contact.us>
Date: Wed, 10 Oct 2001 18:28:33 +0200

> On Wed, Oct 10, 2001 at 10:16:43AM +0100, Palmer J.D.F. wrote:
> > I want to cluster/load balance 2 maybe 3 possibly 4 machines for
> > better peak performance and better resilience over our existing
> > systems. I am open to suggestions though.
>
> > What spec should I be looking at, what is more important?
> > how much disk space
> > how much RAM
> > RAID?
> > single or dual processor
>
> RAM is related to disk space - essentially allow around 10MB per GB of
> cache disk, plus overheads. Generally there's little point in going for

Plus a little for cache_mem, will you? :)
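The rule of thumb above works out like this (a quick sketch; the
cache_mem and fixed-overhead figures below are assumptions for
illustration, not Squid defaults):

```javascript
// Rough Squid memory sizing, per the rule of thumb above:
// ~10 MB of RAM per GB of cache_dir disk, plus cache_mem and a fixed
// OS/process overhead (the 256/128 MB figures are assumptions).
function estimateRamMB(cacheDiskGB, cacheMemMB = 256, overheadMB = 128) {
  const indexMB = cacheDiskGB * 10; // in-core metadata for on-disk objects
  return indexMB + cacheMemMB + overheadMB;
}

// e.g. the 8x18GB Dell boxes mentioned later (one disk left for the OS):
console.log(estimateRamMB(7 * 18)); // prints 1644 - close to their 1.5GB
```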

> RAID, instead go for one partition per cache disk. A second CPU is
> unlikely to make a significant difference, although Joe Cooper recently
> mentioned an SMP machine running two separate instances of squid.

It's of course possible. It helps to have an OS which supports
CPU pinning. At that point it's just like having two
different squids on two different boxen.
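The no-RAID advice above maps onto squid.conf as one cache_dir line per
physical disk; a minimal sketch, assuming aufs (the async-io store in
Squid 2.4) and made-up mount points and sizes:

```
# squid.conf sketch (assumed paths/sizes): one cache_dir per physical
# disk, each on its own partition/filesystem - no RAID striping.
# Format: cache_dir <type> <path> <size-MB> <L1-dirs> <L2-dirs>
cache_dir aufs /cache1 16384 16 256
cache_dir aufs /cache2 16384 16 256
cache_dir aufs /cache3 16384 16 256
```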

> We're in the process of changing from Sun (Ultra 10, 440MHz, 768MB,
> 6x9GB cache disk, Solaris 8) to Intel hardware. The two currently in
> service are Dell Poweredge 2500 servers, single 1000MHz CPU, 1.5GB RAM,
> 8x18GB 10krpm disk - you should be able to get three for around your
> given budget.
>
> Software is a Redhat 7.1-based install with 2.4.7 kernel, 7 reiserfs
> cache disks, Squid 2.4STABLE2, async-io. The two machines in service
> are now routinely handling 150-200 requests per second - in tests last
> week I managed to push the load a bit higher but things were looking
> distinctly uncomfortable by 250 req/sec. As a comparison, our Solaris
> boxes struggled above 60-70 req/sec.

CPU matters :(
Maybe an UltraSPARC III would compare better, AND it has a better
I/O architecture.

> The Dells have been handling all our traffic of late: now term has
> started again, 15 million requests and nearly 100GB a day. I'm now
> planning to ditch the old Solaris servers (we can find plenty of uses
> for them elsewhere) and replace them with a couple more Dell machines
> to ensure that we've got a reasonable amount of spare capacity. I
> might be tempted to put more RAM on them this time, given that the
> price difference between 6x256 and 4x512MB has fallen dramatically
> since the first two were purchased.
>
> One major gripe has been Dell's customer care - the servers were
> shipped incorrectly and this took a *lot* of hassle to sort out. I'd
> like to think this was very unusual.
>
> > What methods of load balancing/clustering are available? How
> > reliable are they?
>
> We're using an Alteon AceDirector 3, which works very well, but these
> are not at all cheap. You might be better going for something like
> Linux Virtual Server, as used by the JANET cache clusters, if budget
> is an issue. If you're using cache.pac files there are tricks you can
> do with these as a very cheap form of load-balancing - see Duane
> Wessels' O'Reilly book.

I'm using this one; it's very flexible. For instance, it would allow for
weighted hashing with failover, as long as you're willing to spend a
few days tweaking the .pac.
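A minimal sketch of such a .pac (the cache hostnames and weights are
made up for illustration). PAC files are plain JavaScript: the browser
calls FindProxyForURL() and tries the returned proxies left to right,
so appending a second "PROXY" entry after the ";" gives cheap failover:

```javascript
// Hypothetical caches with relative weights (both are assumptions).
var PROXIES = [
  { server: "cache1.example.ac.uk:3128", weight: 2 },
  { server: "cache2.example.ac.uk:3128", weight: 1 },
];

function hashHost(host) {            // simple deterministic string hash
  var h = 0;
  for (var i = 0; i < host.length; i++) {
    h = (h * 31 + host.charCodeAt(i)) % 1048576;
  }
  return h;
}

function FindProxyForURL(url, host) {
  var total = 0;
  for (var i = 0; i < PROXIES.length; i++) total += PROXIES[i].weight;
  var n = hashHost(host) % total;    // pick a weighted bucket per host,
  var primary = 0;                   // so each host sticks to one cache
  for (var i = 0; i < PROXIES.length; i++) {
    n -= PROXIES[i].weight;
    if (n < 0) { primary = i; break; }
  }
  var backup = PROXIES[(primary + 1) % PROXIES.length];
  return "PROXY " + PROXIES[primary].server +
         "; PROXY " + backup.server + "; DIRECT";
}
```

Hashing on the host keeps each site on one cache (better hit rates than
round-robin), and the trailing DIRECT keeps browsing alive if every
cache is down.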

> > We currently run Squid 2.2STABLE5; should I be looking into
> > upgrading this? What kind of overhead does authentication through
> > Squid place on the Squid servers?
>
> I've no experience of this as we're using interception proxying (the
> acedirector does allow us to exclude a handful of sites which for
> various reasons can't go through the proxy).

If you're not using NTLM auth it's reasonable. NTLM can put a noticeable
dent in network usage, since it has to fail two full requests
_for_each_tcp_connection_.

-- 
	/kinkie
Received on Wed Oct 10 2001 - 10:18:30 MDT
