Re: [squid-users] 50 requests per second

From: Arindam Haldar <arindam@dont-contact.us>
Date: Fri, 28 Jun 2002 15:35:46 +0530

Do you have any tools to perform this kind of test?
Can anyone on the list give some advice on the tools they use?
It would be a great help for others to share their experience with various
hardware limits they have hit! Maybe it would help make squid an even better
cache/proxy (which it already is :)
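For a first rough measurement, a crude requests-per-second smoke test can be scripted; the sketch below is a minimal Python example (the proxy address, target URL, and request count are placeholders, and a dedicated benchmark such as Web Polygraph will give far more realistic numbers, with proper hit ratios and object-size mixes):

```python
"""Crude requests-per-second smoke test against an HTTP proxy.

A rough sketch only: PROXY and URL below are placeholder values, and
failed requests are simply skipped. For real capacity testing use a
purpose-built tool (e.g. Web Polygraph) instead.
"""
import time
import urllib.request

PROXY = "http://127.0.0.1:3128"   # assumed Squid listen address
URL = "http://example.com/"       # placeholder target page
N = 100                           # number of requests to time
RUN_LIVE = False                  # set True only when a proxy is actually running


def reqs_per_sec(completed: int, elapsed_s: float) -> float:
    """Throughput, guarding against a zero-length interval."""
    return completed / elapsed_s if elapsed_s > 0 else 0.0


def run_test() -> None:
    # Route all requests through the proxy under test.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY})
    )
    ok = 0
    start = time.monotonic()
    for _ in range(N):
        try:
            with opener.open(URL, timeout=10) as resp:
                resp.read()
                ok += 1
        except OSError:
            pass  # failed request; it still counts against elapsed time
    elapsed = time.monotonic() - start
    print(f"{ok}/{N} in {elapsed:.1f}s = {reqs_per_sec(ok, elapsed):.1f} req/s")


if RUN_LIVE:
    run_test()
```

Repeating the run with the cache already populated (warm) versus empty (cold) shows hit-path versus miss-path performance separately.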

On Friday 28 June 2002 12:04 pm, Joe Cooper wrote:
> Robin Stevens wrote:
> > On Fri, Jun 21, 2002 at 01:08:00PM -0400, Robert Adkins wrote:
> >>May I ask some specs on that one box that you are using? It would help
> >>greatly to know the level of hardware required for such an install. It
> >>might end up being more cost-effective and less of a headache if there is
> >>system failure, to run two systems.
> >
> > While I can't speak for Joe's systems, we have servers capable of
> > sustaining peak loads well in excess of 200 requests/second without
> > noticeable loss of performance. In tests I've had them as high as
> > 300/sec with significantly higher (but not unbearable) latency. But I've
> > yet to be convinced that the current four servers will be enough to see
> > us through the 2002/3 academic year if traffic continues to grow at the
> > present rate...
>
> The fastest machine I've shipped so far:
>
> 1.26GHz PIII
> 2GB RAM
> 2 x 15k RPM 36GB disks (approximately 36GB configured for cache_dir)
>
> I benchmarked this box at 240 reqs/sec for four hours on a full cache,
> and didn't have time to really push it beyond its limits. I wouldn't
> expect it to sustain that in an ISP environment however, and would
> recommend it for a significantly less demanding life (I'm comfortable
> calling it a 180reqs/sec machine, with ability to handle peaks over
> 220--it lives in an ISP where it peaks at 160 reqs/sec, and usually
> sustains around 120).
>
> I've also shipped a dual PIII machine at 1GHz, with three 15k 18GB disks
> and 2GB of RAM, but that requires dual processes to take full advantage
> of the multiple CPUs--a nuisance, and the CPU affinity of current Linux
> kernels is non-existent, so the CPU hopping is pretty costly. It is
> still faster than the 1.26GHz machine, though, and lives at an ISP where
> it has been seen doing 225 reqs/sec for about 3-4 hours sustained in the
> evenings. I call it 'full'...and if their needs increase much we'll
> have to cluster.
>
> We've recently gotten word from the manufacturer that we can begin
> shipping P4 and Xeon systems, so I'm looking forward to trying out a
> 2.4GHz machine, or a Xeon 1.7GHz with up to 12GB of RAM and 4 disks all
> in a 1U chassis. ;-)
>
> > Our hardware is based around Dell Poweredge servers: single PIII CPU, 1.5
> > or 2GB RAM, 7x 10000 or 15000 rpm cache drives giving about 100GB of
> > cached data per server. Software is based around Redhat Linux with 2.4.x
> > kernel and reiserfs cache partitions (mounted noatime,notail) and squid
> > 2.4Stable6. I'll probably be investigating 2.5 over the summer while
> > we've got the spare capacity for me to perform tests.
>
> These specs are fine. I wouldn't change a thing (except less cache_dir
> and more RAM). ;-)
>
> > Joe always seems to be recommending considerably more RAM for that amount
> > of cache disk, but I guess this depends on the size of the average stored
> > object. In our case this is around 20k - we get a lot of downloads into
> > tens or even hundreds of megabytes.
>
> My RAM recommendation assumes pretty strict disk I/O throughput
> limitations (the 2-3 disks that we can fit into our 1U chassis don't
> provide as much I/O bandwidth as is really needed to max out the CPU
> once you get up to these speeds). You get around that limitation mostly
> by providing 7 disks, which is a honking lot of throughput, and very
> good disk availability. Another reason is that latency reduction is the
> prime motivation for caching in most of my clients' environments. ISPs
> in the US are not strapped for bandwidth in most cases, but they do want
> a 'leg up' on the competition in their area, by providing a 'snappier'
> browsing experience (and if it saves them a few hundred bucks a month on
> bandwidth, that's a nice bonus).
>
> As I've always said, a safe number is 10MB of RAM for every GB of
> cache_dir. In your case, 100GB of disk works out to 1GB of RAM. You then double
> it, and go zoom. You're fine, and not too far off from where I spec our
> low end boxes, where performance isn't the primary concern. Though I'm
> still only shipping 36GB or maybe 40GB of cache storage in a 2GB machine
> (but those boxes are wicked fast). It would be safe to go higher, but
> not so good for performance. If I ever have a client interested in more
> disks, we can get a nice 2U chassis that will support 6 disks, so maybe
> I'll get to give one of those a go sometime with 100GB of cache_dir.
>
> It's all about balance. And RAM being dirt cheap these days.
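Joe's 10MB-of-RAM-per-GB-of-cache_dir rule works out like this (a sketch only; the doubling is his stated safety margin for headroom, not a Squid requirement):

```python
def min_ram_mb(cache_dir_gb: float, safety_factor: float = 2.0) -> float:
    """Rule of thumb from the thread: 10 MB of RAM per GB of cache_dir,
    then double it for headroom (index, hot objects, OS buffers)."""
    return cache_dir_gb * 10 * safety_factor

# Robin's servers: ~100 GB of cache per box
print(min_ram_mb(100))  # 2000.0 MB, i.e. the 2 GB they already have

# Joe's 36 GB cache_dir boxes
print(min_ram_mb(36))   # 720.0 MB minimum; he ships 2 GB for speed
```

The extra RAM beyond the minimum goes to the OS buffer cache, which is where much of the "wicked fast" behaviour comes from.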
Received on Fri Jun 28 2002 - 04:00:25 MDT