Re: Number of dnsserver processes?

From: Clifton Royston <cliftonr@dont-contact.us>
Date: Wed, 18 Aug 1999 09:17:19 -1000 (HST)

Williams Jon writes:
> Several people have asked about the setup, so I figured I'd post back to the
> list instead of responding to each individually.

I, for one, much appreciate that.
 
> We overbuilt the hell out of the system. The proxy that I wrote the
> question on is a pair of Sun Ultra Enterprise 3000 boxes, one with three
> 248 MHz CPUs and 1 GB of RAM, and one with two 168 MHz CPUs and 512 MB of RAM.
> Both have seven 4 GB Fast-Wide Differential SCSI disks dedicated to cache.

Any guesstimate of the byte hit rate you're getting, with that
amount of disk per server?

> The cache disks are configured as a logging filesystem using the Solstice
> Disk Suite with the log being recorded on a mirrored pair of 100 MB
> partitions on disks that are unused except for the logging function. The
> physical disks themselves actually live in a shared EMC disk array, which
> has something like 1 GB of RAM for caching, and they are configured as
> mirrors on the back end.

[drool...] I really wish I could afford the EMC equipment for our
other servers - not sure I'd use it on a cache server, but for shared
storage it looks very, very nice.

> All of this sits behind a pair of Cisco Local Directors in failover mode
> which uses the Least Connections pragma for routing traffic in and out. All
> of the network connections are 100 Mb/s full-duplex Ethernet, and we've got
> a minimum bandwidth potential of 12 MB/sec (two sets of lines from separate
> vendors, using BGP for availability).

This is very helpful, since that's close to our bandwidth.
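
Incidentally, the "Least Connections" routing you mention is easy to
picture; here's a toy sketch in Python (the server names and counters
are mine, purely for illustration - this is not LocalDirector code):

    # Toy least-connections picker: send each new session to whichever
    # real server currently has the fewest active connections.
    servers = {"cache1": 0, "cache2": 0}   # hypothetical server names

    def pick_server():
        name = min(servers, key=servers.get)
        servers[name] += 1                 # count the new session
        return name

    def end_session(name):
        servers[name] -= 1                 # session closed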

> I wouldn't recommend doing this unless you have users with very, very low
> thresholds for downtime. It is a large investment in hardware and takes a
> full-time person just to run/tweak/troubleshoot it. The Local Directors introduce
> some problems with websites that do session tracking and/or security based
> on the perceived client IP address, since the sessions shift back and forth
> between the two servers.

You might press Cisco to implement "sticky" connections (or check if
it's buried in their docs somewhere?) - the switches we're preparing to
use have that as a configurable option, where they will attempt to
consistently route sessions from a given client to the same cache
server or load-balanced server within a time threshold you set (e.g. 5
minutes or 15 minutes). That would help with the problem you describe.
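
To make that concrete: a sticky table is little more than a client ->
server map with an expiry timer. A minimal sketch in Python (the TTL
and the picker function are my assumptions, not anything from Cisco's
docs):

    import time

    STICKY_TTL = 300   # assumed 5-minute stickiness window
    sticky = {}        # client IP -> (server, last-seen time)

    def route(client_ip, pick_least_loaded):
        now = time.time()
        entry = sticky.get(client_ip)
        if entry and now - entry[1] < STICKY_TTL:
            server = entry[0]             # within the window: stay put
        else:
            server = pick_least_loaded()  # new or expired: balance as usual
        sticky[client_ip] = (server, now) # refresh the timer
        return server

While a client keeps hitting route() within the window, it keeps
landing on the same server, which is exactly what IP-based session
tracking needs.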

Some products also have an option for attempting to partition the URL
space between the cache servers based on a hash of the destination
host; that would help too, and might achieve more effective use of your
cache disk by reducing storage duplication across the two servers.
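
Just to illustrate that idea too: hashing on the destination host means
every request for a given site lands on the same cache, so each object
is stored only once. A rough sketch (cache names are hypothetical, and
the hash choice is arbitrary):

    import hashlib

    caches = ["cache1", "cache2"]   # hypothetical cache servers

    def cache_for(host):
        # Stable hash of the destination hostname, so all requests
        # for one site always map to the same cache server.
        digest = hashlib.md5(host.encode()).digest()
        return caches[digest[0] % len(caches)]

    print(cache_for("www.example.com"))  # deterministic per host

The catch with a plain modulo hash is that adding or removing a cache
reshuffles most hosts; schemes like CARP were designed to soften that.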

Any other "gotchas" you'd care to share with us on implementing caching
on the grand scale?
  -- Clifton

-- 
 Clifton Royston  --  LavaNet Systems Architect --  cliftonr@lava.net
        "An absolute monarch would be absolutely wise and good.  
           But no man is strong enough to have no interest.  
             Therefore the best king would be Pure Chance.  
              It is Pure Chance that rules the Universe; 
          therefore, and only therefore, life is good." - AC