Re: Squid 2.x performance curiosities

From: Dancer <dancer@dont-contact.us>
Date: Mon, 26 Apr 1999 10:08:55 +1000

Alex Rousskov wrote:
>
> On Mon, 26 Apr 1999, Dancer wrote:
>
> > It was empty at the start of the tests. I've been letting it fill, but
> > it's not hit 10% yet, so no...no garbage collection at all.
>
> Note that a Unix FS will slow down as the disks get full. One may be able
> to start filling at 120 req/sec but may have to decrease the rate to
> 50 req/sec when the disks are close to full. That is one of the reasons to
> keep cache disks only 50% utilized...
>
> Try this: right after the experiment dies at 120/sec, clear the cache (or
> re-point cache_dir to an empty disk) and quickly restart the experiment. If
> you get back to 120/sec, it may be the file system playing its dirty tricks
> on you...
>
> Alex.
>
> P.S. I assume you've checked "netstat -m" and "netstat -p tcp" for network
> related warning signs... :)
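For scale, here is a rough back-of-the-envelope on how quickly a cache_dir fills at these request rates. The mean object size and cache_dir size below are assumptions for illustration only (not figures from this thread); the 120/sec rate and the 50%-utilization advice come from the messages above.

```python
# Rough fill-rate arithmetic for a cache_dir, illustrating why a run that
# starts fine at 120 req/sec can degrade as the file system fills.
# ASSUMPTIONS (not from the thread): ~11 KB mean object size, 6 GB cache_dir.

MEAN_OBJECT_KB = 11          # assumed mean cached-object size
REQ_RATE = 120               # req/sec, from the test above
CACHE_DIR_MB = 6000          # hypothetical cache_dir size
TARGET_UTILIZATION = 0.50    # "keep cache disks only 50% utilized"

fill_mb_per_hour = REQ_RATE * MEAN_OBJECT_KB * 3600 / 1024
hours_to_target = CACHE_DIR_MB * TARGET_UTILIZATION / fill_mb_per_hour

print(f"fill rate: {fill_mb_per_hour:.0f} MB/hour")
print(f"hours until 50% full: {hours_to_target:.1f}")
```

At these rates a modest cache_dir crosses the 50% mark within the first hour, so FS-related slowdown can plausibly show up even in short runs.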

Had my brain upside-down. I was running a 90/sec test (not 100/sec as I
mentioned some moments ago), which was holding okay at 600MB of
buffering, but only barely. Pausing the test for ten seconds and then
letting it go again spiralled everything out of control.

Ran a ten-minute goal with dhr==0 && rep_cachable==0 &&
req_rate==120/sec. Steady-state connections in use ~ 400. Success. Left
the cache and disks as they were (16% full, 600MB buffered).

Repeat tests at different rates:
160/sec  rapid loss.
150/sec  less rapid loss.
140/sec  better, but still a loss.
130/sec  stable at ~450 connections.

(reference: The client box maxes out around 1900 descriptors)
(benchmark: Initial tests without squid showed that rates of 200/sec were
stable on this network with the basic PolyMix set)
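As a sanity check on the descriptor ceiling, Little's law (L = lambda * W) applied to the figures above suggests the client box is not the bottleneck. The arithmetic is illustrative only; all input numbers come from the results reported in this message.

```python
# Little's law: concurrent connections L = arrival rate (req/sec) x mean
# residence time W (seconds). Numbers taken from the test results above.

conns_at_120 = 400      # steady-state connections at 120 req/sec
conns_at_130 = 450      # steady-state connections at 130 req/sec
max_descriptors = 1900  # client box descriptor ceiling

w_120 = conns_at_120 / 120   # mean connection lifetime, seconds
w_130 = conns_at_130 / 130

# Rate at which the client itself would exhaust descriptors, assuming
# connection lifetimes stayed near the values observed here:
client_limit = max_descriptors / max(w_120, w_130)

print(f"mean residence time ~{w_120:.2f}-{w_130:.2f} s")
print(f"client descriptor ceiling reached near {client_limit:.0f} req/sec")
```

With connections living ~3.3-3.5 seconds, the client would not hit its 1900-descriptor limit until somewhere around 550 req/sec, well above the 130-160 range where the runs fall over; the losses above look server-side.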
Received on Tue Jul 29 2003 - 13:15:57 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:12:06 MST