Re: solaris 7 & squid

From: Jens-S. Voeckler <voeckler@dont-contact.us>
Date: Fri, 10 Dec 1999 11:17:49 +0100 ((MEZ) Central European Time)

On Fri, 10 Dec 1999, Pauline van Winsen wrote:

> > > the cache filesystems are mounted with the "noatime" option. /etc/vfstab
> > > entries for cache filesystems look like so:
> > > /dev/md/dsk/d3 /dev/md/rdsk/d3 /var/cache1 ufs 2 yes noatime
> >
> > I think you will find slightly better performance if you also enable
> > logging on your UFS filesystems, but I have not studied Solaris 7 UFS
> > logging in detail.. Comments on Solaris 7 logging UFS welcome.
>
> this isn't something i've looked at. squid is performing extremely well
> & i'm not inclined to "fix it" 8-)

UFS logging (refer to docs.sun.com if in doubt) mainly speeds up the boot
process, especially after a dirty shutdown. Since I have heard of bad
experiences using the metadevices with squid (and/or innd), you may want
to check whether you are better off letting squid do the load balancing
across plain (cXtYdZsW) disk partitions...

Anyway, I fiddled a little with my PCI HW-RAID (the drives look to the
kernel like plain disks) and UFS logging:

1) Yes, for plain linear writes, UFS logging seems to improve performance.
2) No, for linear writes with 10 % random reads, UFS logging does not
   make much of a difference. This is what I estimated as squid
   behaviour with a fresh cache.
3) No, for random writes with 10 % random reads, UFS logging actually
   slows you down. This is a rough estimate for a squid that has been
   running a few weeks (granted, a real squid would have a lot more reads).
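
For reference, on Solaris 7 UFS logging is just a mount option, so trying
it out is a one-word change in vfstab (device names made up):

    /dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 /cache1 ufs 2 yes logging,noatime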

The underlying device was a HW-RAID0 via PCI (that is, the CPU talks to
the RAID controller via PCI and *not* via SCSI) over 4 disks. Mind you,
this was just a quick and dirty test to get a feeling for the
configuration.

There is also a logging facility in the metadisk drivers. I haven't
looked at its performance, but be aware that the MD log should go onto a
disk of its own with nothing else on it - even if you only use 100 MB of
a 4 GB disk (see Cockcroft).
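
A trans metadevice setup might look roughly like this (DiskSuite, all
device names made up; double-check against the metainit man page):

    metainit d12 1 1 c2t0d0s0    # log device, alone on its own disk
    metainit d11 1 1 c1t0d0s0    # master (data) device
    metainit d10 -t d11 d12      # trans device = master + log
    newfs /dev/md/rdsk/d10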

At the other extreme, if you can live with losing all your cache data
once in a while, you might want to give the fastfs ioctl a look. In all
three categories mentioned above, it led to an improvement. But it is
risky!
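
If I remember Cockcroft's fastfs utility correctly, it is used like this
(mount point made up). The delayed I/O mode it enables means a crash can
cost you the whole filesystem, not just the last few writes:

    fastfs /cache1 fast    # enable delayed I/O
    fastfs /cache1         # query the current state
    fastfs /cache1 slow    # back to normal, safe behaviour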

> > > in /etc/nscd.conf:
> > > suggested-size hosts 1001
> > > this figure was found after numerous looks at the output of 'nscd -g'.
> > > nscd hit rate is generally around 99%.
> >
> > Unless they have changed nscd to a multithreaded implementation in
> > Solaris 7, it will be a performance killer for DNS lookups.
>
> nscd is threaded in solaris 7.

I have heard rumours that it still uses one single (eye-of-the-needle)
lock for name resolution, so all threads funnel through that one lock.
 
> > > there is also a caching bind-8.2.2-P5 named running on the host.
> > > some sources of info say this is not an ideal config for squid,
> > > but it's necessary in this situation to handle both internet &
> > > intranet dns lookups.
> >
> > Squid and named happily coexist on the same host, provided you have the
> > memory to host both. It is all a matter of resource control.
>
> yep.

Since you are running bind, you may want to look into the bind
documentation: the interaction with nscd should be mentioned there. Based
on past experience, when installing servers I *always* disable at least
the host lookup part of nscd (usually I disable it completely). As it
stands, you are caching host names in three different places: bind, nscd
and squid.
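
Disabling just the hosts cache is a one-line change in /etc/nscd.conf:

    enable-cache hosts no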

With best wishes,
Dipl.-Ing. Jens-S. Vöckler (voeckler@rvs.uni-hannover.de)
Institute for Computer Networks and Distributed Systems
University of Hanover, Germany; +49 511 762 4726