Re: server config advice?

From: Oskar Pearson <>
Date: Tue, 19 Aug 1997 20:00:51 +0200


> We will have a 512MB RAM Linux-based system with 32GB of cache disk. We
> have 4GB Barracuda fast-wide cache drives at present; is fast-wide
> the go, or are we better off using other types of drives? We are
> intending to run these on a RAID controller splitting 4 cache drives per
> SCSI channel (the RAID is being used to provide 3 channels - 1 for
> system boot (RAID 5) and 2 for cache disks running RAID 0). Any other
> considerations given we are expecting approximately 2 million plus
> hits/day?
Hmm - I presume that you mean HTTP requests, not ICP.

A definite must: the filehandle patch (I will issue an
update for this soon, but it works fine for me and lots of others).
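A quick way to check whether the descriptor limit is actually your bottleneck before patching - a sketch assuming a /proc-based system (the PID below is hypothetical):

```shell
# Per-process file-descriptor limit inherited by new processes;
# stock systems of this era typically default to 256 or 1024.
ulimit -n

# Descriptors a running squid actually holds open
# (12345 is a hypothetical PID - substitute your squid's):
# ls /proc/12345/fd | wc -l
```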

What CPU are you using?
raid0 is probably a good idea if you have enough CPU, especially if you
use the correct mke2fs options. Here is some info I got from a mailing
list (it may not be applicable to squid, considering the maximum object
size):
From: "Leonard N. Zubkoff" <>

  RAID-0 is stable. I would recommend testing an ext2 filesystem with
  a 4kB block size, rather than the default 1kB, and with RAID chunk sizes
  of 4kB to 32kB.

I definitely concur that the 4KB block size is critical to getting maximum
sequential I/O performance, even for a single SCSI disk. You should also be
aware of the -R option to mke2fs 1.10. It is designed to inform mke2fs of the
stripe width so that the file system metadata (block and inode bitmaps) does
not all end up on a single disk, thereby creating an unbalanced I/O load. Ted
added this in response to my noticing the problem of unbalanced I/O and
complaining to him about it. You also want to use mke2fs 1.10 because for 4KB
file systems it will default to 32768 blocks per group rather than the 8192
blocks per group earlier versions allowed. For example, to build a 4KB file
system with a raid chunk size of 64KB, you want to use the command

    mke2fs -b 4096 -R stride=16

The stride=16 parameter informs mke2fs that the stripe width (chunk size) is
64KB = 4KB * 16.
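The stride arithmetic above generalizes to any chunk/block combination; a small shell sketch (assuming sizes in whole KB):

```shell
# stride = RAID chunk size / ext2 block size (both in KB)
chunk_kb=64   # RAID chunk size from the example above
block_kb=4    # ext2 block size
stride=$((chunk_kb / block_kb))
echo "mke2fs -b $((block_kb * 1024)) -R stride=$stride"
# -> mke2fs -b 4096 -R stride=16
```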

I've been doing some experiments on I/O performance using two or three striped
Quantum Atlas II Wide 2GB drives. If there is interest in the results, I'll be
happy to share them here. I'm testing both with 2.0.30 and using a modified
fs/buffer.c from 2.0.29. There have been so many proposed patches for 2.0.31
that I don't know where to begin, so I want to see first if there's much
difference between these two versions.

Look for a file called md-0.35.tar.gz on your nearest sunsite mirror.
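For reference, the later raidtools describe such an array in /etc/raidtab; a sketch for a two-disk RAID 0 cache array (device names hypothetical - note md-0.35 itself ships with the older mdtab-style tools, so this format may not apply to it directly):

```
raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        chunk-size      32        # KB; within the 4KB-32KB range suggested above
        device          /dev/sdb1
        raid-disk       0
        device          /dev/sdc1
        raid-disk       1
```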

upgrade to the latest libc (we run or later), otherwise there
are slight problems with the fd patch

compile the kernel with SYN-cookies protection
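On 2.0.x kernels SYN-cookie protection is a compile-time choice; later kernels also expose a runtime switch. A config sketch (option and /proc path as in mainline kernels):

```
# 2.0.x: answer Y during "make config":
#   CONFIG_SYN_COOKIES=y
# Later kernels can also toggle it at runtime (as root):
#   echo 1 > /proc/sys/net/ipv4/tcp_syncookies
```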

A few more words:
echoping will tell you if your server is getting overloaded.
Turn on the NO_ATIME option in the kernel - this should
        speed stuff up.
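On kernels that support it, the same effect is available per-filesystem via the noatime mount option, so reads of cache objects don't generate inode-update writes. An /etc/fstab sketch (device and mount point hypothetical):

```
# mount the cache filesystem with noatime
/dev/md0   /cache   ext2   defaults,noatime   0   2
```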

Think about a cluster of slightly lower-end machines... I found
that on our system, once the load average reached 1, squid slowed down
fairly badly (http://cache/squid/opt/performance.html - though it seems
that URL doesn't really apply to linux that well). SMP may help, but not
much, though it will leave squid to happily do its thing while
the kernel runs on another CPU.

Hardware-wise, we have been able to get 7 megabytes per second (not mbits/s!)
(measured with bonnie) from a raid0 disk on a P166 talking to 2 separate
Barracuda drives across an Adaptec 2940 split SCSI bus.
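To keep the bytes-versus-bits distinction straight, the conversion is just a factor of eight:

```shell
# Convert the bonnie figure from megabytes/s to megabits/s
mb_per_s=7
echo "$((mb_per_s * 8)) megabits/s"   # -> 56 megabits/s
```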

Received on Tue Aug 19 1997 - 11:06:05 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:36:47 MST