Re: some linux tuning

From: Brian <signal@dont-contact.us>
Date: Sat, 26 Jun 1999 12:15:15 -0500 (CDT)

On Sat, 26 Jun 1999 jlewis@lewis.org wrote:

> On Sat, 26 Jun 1999, Brian wrote:
>
> > PII450
> > 512MB SDRAM
> > DPT3334UW RAID controller doing RAID0 with 64MB disk cache
> > Linux 2.2.7ac4
>
> I don't know how the 3334 performs for RAID0, but I played with one for
> RAID5, and it was pretty slow. Isn't Linux software RAID0 (especially on
> a PII450) likely to give much better performance than the DPT card?

The RAID5 is not the fastest. DPT has a whole new generation of cards
using the i960 chipset (like the Mylex cards do); they're cheaper and a
lot faster, though I haven't used them yet. The DPT driver for Linux is
solid, though. The RAID0 is pretty decent, actually it's very good. The
64MB disk cache really helps too. I would be interested in seeing some
benchmarks between DPT RAID0 and md RAID0.
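
If anyone wants to set up the md side of that comparison, a minimal
raidtools 0.90 setup on 2.2 would be something like this (the md device
and partitions here are just placeholders for whatever the disks are):

   # hypothetical /etc/raidtab for a two-disk stripe set
   raiddev /dev/md0
       raid-level              0
       nr-raid-disks           2
       persistent-superblock   1
       chunk-size              64
       device                  /dev/sda1
       raid-disk               0
       device                  /dev/sdb1
       raid-disk               1

Then mkraid /dev/md0, mke2fs it, and run the same benchmark on both boxes.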

>
> > echo "768 1280 2048" >/proc/sys/vm/freepages
> > echo "2 10 90" >/proc/sys/vm/buffermem
> >
> > file-max and inode-max just make sense. I still don't know how I feel
> > about the buffermem and freepages. Anyone else doing some proc hacking
> > feel that is sane? Is there anything else you would do as far as proc
> > tuning?
>
> I've never messed with buffermem, but I do similar things with freepages.
> You want to make sure there's enough free memory for atomic allocations to
> not fail.
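
If I'm reading the 2.2 sysctl docs right, the freepages triple is the
min/low/high free-page watermarks and buffermem is min/borrow/max percent
of memory for the buffer cache. Something like this in rc.local keeps it
across reboots:

   # hypothetical /etc/rc.d/rc.local snippet for 2.2 kernels
   echo "768 1280 2048" > /proc/sys/vm/freepages   # min low high (pages)
   echo "2 10 90" > /proc/sys/vm/buffermem         # min% borrow% max%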
>
> > 2. File descriptors increased to 8192 (2.2.7ac4)
>
> How many do you actually see used? I run some smaller squid boxes on
> Linux 2.2.10, and rarely see many more than 300 fds in use.

Well, cachemgr showed FDs peaking at around 900. I guess 8192 is overkill,
but do you think there is a performance hit in having that many set as a
high watermark? Perhaps 2000 would be more on par for now.
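
If I do drop it, I suppose it's just the same proc knobs, something like
this (the 3x inode figure is the usual rule of thumb, nothing more):

   echo 2048 > /proc/sys/fs/file-max
   echo 6144 > /proc/sys/fs/inode-max   # roughly 3-4x file-max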

>
> > 3. Hardware RAID0 with 64MB disk cache (dpt3334uw controller)
>
> Before going live, did you run any disk benchmarks to see what the max
> throughput to the filesystem was and what the max seeks/second was?

I think I ran some hdparm tests, nothing too intensive.

> It would be interesting to compare bonnie, with a 1.9gb file size, to the
> same setup using a regular UW SCSI card and software RAID0.
>

It would. lmdd could work too: ftp://ftp.bitmover.com/pub/lmdd.tar.gz
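
A first pass could be as simple as the lines below; the main thing is
keeping the test file bigger than the 512MB of RAM so the page cache
doesn't flatter the numbers (the device and paths are placeholders):

   hdparm -tT /dev/sda                           # raw vs. cached reads
   bonnie -d /usr/local/squid/cache -s 1900      # 1.9GB working file
   lmdd if=internal of=/usr/local/squid/cache/tst bs=64k count=16384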

> > 4. 512MB memory.........cache_mem 160 MB (is that a good value? 1/3?)
> > 5. dns_children 10
>
> You can turn cache_mem down considerably. It'll grow to accommodate your
> system's needs. I suspect you need more dns_children too. Look at a ps
> aux listing. The dnsserver processes seem to grow when used, so if
> they're all the same size, you know they're all getting used and you might
> add a few more, e.g.:

Hmm, I don't think it's that linear, though. Here is what ps shows:

[signal@constellation signal]$ ps aux|grep dns
signal 4420 0.0 0.0 836 348 p1 S 11:57 0:00 grep dns
squid 4125 0.0 0.1 1024 584 ? S 20:13 0:11 (dnsserver)
squid 4126 0.0 0.1 1024 576 ? S 20:13 0:02 (dnsserver)
squid 4127 0.0 0.1 1024 572 ? S 20:13 0:00 (dnsserver)
squid 4128 0.0 0.1 1012 536 ? S 20:13 0:00 (dnsserver)
squid 4129 0.0 0.1 1012 540 ? S 20:13 0:00 (dnsserver)
squid 4130 0.0 0.1 1012 536 ? S 20:13 0:00 (dnsserver)
squid 4131 0.0 0.1 1012 536 ? S 20:13 0:00 (dnsserver)
squid 4132 0.0 0.0 908 404 ? S 20:13 0:00 (dnsserver)
squid 4133 0.0 0.0 908 404 ? S 20:13 0:00 (dnsserver)
squid 4134 0.0 0.0 908 404 ? S 20:13 0:00 (dnsserver)
[signal@constellation signal]$

And here is what cachemgr shows at the same instant:

Dnsserver Statistics:
number running: 10 of 10
requests sent: 32373
replies received: 32373
queue length: 0
avg service time: 3 msec

  # FD # Requests Flags Time Offset
  1 10 21561 A 7.335 0
  2 11 4990 A 13.162 0
  3 12 1051 A 14.897 0
  4 13 229 A 14.812 0
  5 14 65 A 460.967 0
  6 15 18 A 43939.252 0
  7 16 2 A 54835.780 0
  8 17 0 A 0.000 0
  9 20 0 A 0.000 0
 10 42 0 A 0.000 0

Hmm, I wonder why it doesn't load-balance over the children? Maybe it
always hands each lookup to the first idle dnsserver, which would explain
the skew.

>
> squid 6938 0.0 0.2 1024 584 ? S 00:22 0:20 (dnsserver)
> squid 6939 0.0 0.2 1012 540 ? S 00:22 0:00 (dnsserver)
> squid 6940 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6941 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6942 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6943 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6944 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6945 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6946 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6947 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6948 0.0 0.2 1012 536 ? S 00:22 0:00 (dnsserver)
> squid 6949 0.0 0.1 908 404 ? S 00:22 0:00 (dnsserver)
> squid 6950 0.0 0.1 908 404 ? S 00:22 0:00 (dnsserver)
> squid 6951 0.0 0.1 908 404 ? S 00:22 0:00 (dnsserver)
>
>
> > 6. squid 2.2STABLE3
>
> How long can you keep it running? On two different systems, I'm seeing
> leaks with 2.2.STABLE[1-3]. The bigger one (about 10GB of RAID0 space for
> squid) leaks very quickly and can't run for more than a day or two without
> eating all the system's memory. The smaller one (about 2.5GB on a
> single disk for squid) seems to be leaking only a few MB per day.

I have never had a machine go down with 2.2. It runs for weeks and
months. The machine is mostly RH5.2-based, with all updates. I don't use
the squid RPM, though; I compile from source. I wonder if it has to do
with certain runtime options you use and I don't?

>
> > 7. ./configure --prefix=/usr/local/squid --enable-async-io
> > --disable-ident-lookups --enable-gnuregex
>
> I haven't used any options other than --prefix. The smaller one though is
> stock Red Hat 6.0, and I'm not sure how they built Squid.
>
> > 8. Foundry ServerIron redirecting requests to squid
> > 9. cache_dir /usr/local/squid/cache 13500 16 256
> > Is that a little small? Do you think more space is needed?
>
> For your size, that's probably too small. What do you see for Storage LRU
> Expiration Age? Also, this may be a minimal optimization, but wouldn't
> filesystem access be slightly faster if you just mount the cache on
> /squid? I remember reading about news server optimizations, and mounting
> the spool on /news rather than /var/spool/news.

I never thought about that (nor had heard of it), but it may make a
difference. The drives are 10,000 RPM Cheetahs. I want to double the
cache size or so, and think I will.
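
If I double it and move the mount point up at the same time, the whole
change would be something like this (the device name is a placeholder):

   /dev/sda1   /squid   ext2   defaults,noatime   1 2    # fstab entry

and in squid.conf:

   cache_dir /squid/cache 27000 16 256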

>
> > 10. Satellite cache prepopulation feed (www.skycache.com)
>
> It saves lots of bandwidth for your news feed if you run a news
> server...but I'm not convinced it helps much with web caching.

Me neither. We have two satellite feeds for news, and get 20GB+ just over
the satellite. I am very skeptical about cache prepopulation, but hey, if
it comes with the news feed, it's free, and it goes over the satellite,
why not try it?

>
> > 11. Caching only nameserver ran on squid host
>
> I'd definitely set that up if you haven't.
>
> Also, make sure you have your cache setup to restrict access. When I took
> over squid at work, I noticed we had people in Russia porn surfing through
> our squid cache. My guess is they're filtered from these sites and so
> they bounce through open squid servers to get to them.
>
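
I do have the nameserver going. For anyone setting one up, the stock
BIND 8 caching-only config is tiny, something like this (file names per
the usual Red Hat layout):

   // caching-only /etc/named.conf
   options { directory "/var/named"; };
   zone "." { type hint; file "named.ca"; };
   zone "0.0.127.in-addr.arpa" { type master; file "named.local"; };

plus nameserver 127.0.0.1 first in /etc/resolv.conf on the squid box.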

Yeah, my ACLs are wild as hell. If it weren't for the ACLs, my
squid.conf would be small. Even without the ACLs, users that aren't ours
aren't going to get to that box, since the access lists on the border
router block incoming port 80 except to our web server.
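
Minus all our prefixes, the core of it is just the usual
allow-local/deny-all pattern (192.168.1.0/24 is a stand-in here, not our
real netblock):

   acl localnet src 192.168.1.0/255.255.255.0
   http_access allow localnet
   http_access deny all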

Brian

> ----don't waste your cpu, crack rc5...www.distributed.net team enzo---
> Jon Lewis *jlewis@lewis.org*| Spammers will be winnuked or
> System Administrator | nestea'd...whatever it takes
> Atlantic Net | to get the job done.
> _________http://www.lewis.org/~jlewis/pgp for PGP public key__________
>

-----------------------------------------------------
Brian Feeny (BF304) signal@shreve.net
318-222-2638 x 109 http://www.shreve.net/~signal
Network Administrator ShreveNet Inc. (ASN 11881)