RE: [squid-users] HELP!!!

From: John Cougar <cougar@dont-contact.us>
Date: Wed, 23 Nov 2005 09:15:31 +1100

Hiya Chuck

Adding a second, hot-standby L4 switch is probably a good idea ... although
in fairness to Alteon, I haven't seen one freeze or fall over since the
AD180e, and I've been running them in anger for about seven years. Which
Alteon do you have? I assume an AD4?

You haven't really asked anything too specific, but I'm happy to take this
offline if you want to chat about it. Just reply sans the list, if you want.

Cheers,

John.

> -----Original Message-----
> From: Chuck Dutrow [mailto:chuckdutrow@yahoo.com]
> Sent: Wednesday, 23 November 2005 12:52 AM
> To: John Cougar
> Cc: squid-users@squid-cache.org
> Subject: RE: [squid-users] HELP!!!
>
> John
> Yes, this may seem to be a bit much, but we are setting
> up a new wireless ISP and need to be sure it doesn't
> have any trouble! We need very highly available
> fail-over. We are actually looking at using only two
> servers, with Squid transparently caching for all
> users, and we're not sure how to set up Squid for the
> highest performance as a cache server. We have transparent
> interception working with one server but need advice
> beyond that. We also have dual WAN circuits load-balanced
> to the Alteon; I am thinking of adding a second Alteon
> because that is the single point of failure.
> Thanks
> Chuck
>
> --- John Cougar <cougar@telstra.net> wrote:
>
> > Chuck
> >
> > All that for only 1K users?
> >
> > Sounds like a bit of a waste, unless of course
> > you're seeking a very highly
> > available fail-over scenario; the HPs are already
> > fairly highly available
> > (hot swap everything, redundant everything,
> > depending on what you have
> > bought), in which case a single Alteon will also be
> > a single
> > point-of-failure, as will its uplink.
> >
> > Are you planning to intercept "transparently" (i.e.
> > force everyone through the
> > cache)? That's about the only deployment scenario
> > that would make sense, and
> > even then you're highly powered.
> >
> > > Please tell me what the absolute fastest model is? I
> > > have an AceDirector layer 4 switch redirecting directly
> > > to Squid. I have Squid installed with the out-of-the-box
> > > config, a standard transparent setup, on a Compaq DL580
> > > server: quad Xeon 700 MHz, 2.5 GB RAM, and 4 x 18 GB SCSI
> > > drives set up as two RAID 0 arrays for speed, with a Red Hat OS.
> > >
> > > I have 3 of these quad servers and am thinking of
> > > parent-child, but need advice as to the overall plan. This
> > > is being set up with 1000 web users in mind.
> > >
> >
> > I'd use that kind of power for a small country, but
> > if you have it, it
> > should absolutely smoke. I question your choice of
> > Linux, but it may be OK;
> > just steer away from the ext3 FS, and definitely no
> > journalling (you
> > wouldn't, right??). I've had good success with
> > FreeBSD v4.x with Squid on
> > HP; it goes like stink, with few noticeable FS performance
> > problems. But then the
> > right choice of FS under RH may work OK.
> >
> > As for peering the cache system, I have mixed
> > feelings on this one. I have
> > rarely seen deployment scenarios in which the
> > cacheable content mix present
> > on a system of caches performed better through ICP
> > than through refetching
> > from the source (and hence having redundant objects
> > present across the caches),
> > except for the longest-lived objects, which are
> > usually small-to-average in
> > size anyway, unless you are at the bottom of a
> > really slow uplink.
> >
> > I have, at times, choked up transit links with ICP
> > overhead, back in the
> > days when these links were small, but then I ran my
> > system in distributed
> > farms across a geographically dispersed topology ...
> > it sounds to me like
> > you're clustering these boxes at one point?
> >
> > Need more data ...
> >
> > J.
> >
> >
>
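
[Editor's note: a minimal squid.conf sketch for the kind of transparent, ICP-peered two-box setup discussed in this thread, using Squid 2.5-era directives. The hostname, cache sizes, and mount points are illustrative assumptions, not taken from the thread.]

```
# Transparent (interception) caching, Squid 2.5-style accelerator directives.
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

# Cache storage: one cache_dir per physical spindle usually beats RAID 0
# for Squid, and aufs keeps disk I/O off the main thread. Paths and sizes
# here are assumptions; use a non-journalling FS per the advice above.
cache_dir aufs /cache1 16384 16 256
cache_dir aufs /cache2 16384 16 256

# Sibling peering between the two boxes via ICP (hostname is hypothetical);
# proxy-only avoids storing redundant copies of objects fetched from the peer.
cache_peer squid2.example.net sibling 3128 3130 proxy-only
icp_port 3130
```

With only two co-located boxes on fast links, the `proxy-only` sibling arrangement limits the object duplication John describes, at the cost of refetching from the peer on every sibling hit.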
Received on Tue Nov 22 2005 - 15:15:36 MST

This archive was generated by hypermail pre-2.1.9 : Thu Dec 01 2005 - 12:00:10 MST