Re: Re: Swap Disks - but not what you think!

From: <>
Date: Sat, 23 Oct 1999 10:09:33 +1000

Thanks a lot for that, Clifton, & thank you Andreas Kaeser for your response too.

As another thought, I have seen many people asking how much memory Squid uses,
to which the answer is the proverbial "how long is a piece of string?" - i.e. it
depends on your circumstances. And I understand fully. But it does make things
much harder for novices & IT people making proposals for hardware setups in
general (especially because RAM is so expensive at the moment!). I
myself got caught a few months ago. After 6 months of merrily running Squid on
a box with 128 MB RAM & an 8 GB cache, dipping into swap space (I didn't think it
slowed things down - boy was I wrong! - it is all relative, isn't it?), I
thought "Cool, I'll buy this ridiculously cheap 17 GB drive & increase my byte
hit ratio". Of course there are no points for guessing what happened!
I've learned my lesson, but I digress.

How hard would it be to make Squid intelligent enough to dynamically change its
operating variables to suit its environment for optimum performance, & perhaps
shove in a couple of accessible meters (or something with the same functionality,
like in cachemgr)? Impossible? Well, maybe if everyone who could see what a
great benefit this would be chipped in a dollar a day (or whatever), we could
hire a top-gun programmer to do it (just a flight of fancy!). I mean, processor
time couldn't be an issue, could it? Running Squid on a Pentium 200 doesn't
bother the chip at all really (1-7% on my box).


>From: Clifton Royston <>
>Subject: Re: Swap Disks - but not what you think!
>Date: Sat, 23 Oct 1999 5:04 AM

> On Sat, Oct 23, 1999 at 12:07:21AM +1000, wrote:
>> On a Mac, I can switch what Apple call "Virtual Memory" (& what every
>> one else calls swap disks) off altogether & the Mac goes a lot
>> quicker. Yes, it runs out of memory, but you allocate memory
>> manually so if you're running servers you don't have a problem!
> First word of reassurance: the Mac (and Windoze) implementations of
> virtual memory via swap file are many times worse than the UNIX
> implementations, for no excusable reason.
> <Tirade>The various ways to implement virtual memory via swap files
> had been worked out exhaustively by the mid-'70s at latest, and
> certainly in the '80s when those OSes were written, there was no excuse
> for any OS implementing it poorly!</Tirade>
> In any UNIX variant I know of, there is zero penalty associated with
> just having swap. The only performance hit is when a process actually
> gets moved out of RAM to the swap area. This will happen to your
> servers only when and if some other process needs to take RAM in use by
> your server software, e.g. if you run some memory-intensive command
> from the shell on the same machine.
>> I know you can use the command "swapoff -a" with Linux, but I have a vague
>> memory of there being a problem with that.
> There would definitely be a problem if some process suddenly becomes
> a memory hog; it would probably mean that you can't start other
> processes due to lack of memory, possibly including the root shell or
> the "kill" process you'd need to shut down the problem child. As
> noted, this would also mean that inactive but still-resident processes
> will always take up RAM instead of being shuffled off to disk. I don't
> know what Linux-specific problems there might be.
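[My own aside: on the Linux side you can at least see what swap is configured
& how much is in use before risking swapoff - a sketch, with Linux-specific
paths:]

```shell
# Linux equivalents of BSD's "pstat -s" (paths are Linux-specific):
cat /proc/swaps                              # configured swap devices
grep -E '^Swap(Total|Free):' /proc/meminfo   # swap totals, in kB
# After "swapoff -a", SwapTotal reads 0 kB - & anything that outgrows
# RAM from then on will fail to allocate or get killed, as Clifton says.
```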
>> Is there a way that Squid (& other Linux/UNIX) programs can be
>> convinced that they just cannot have more RAM than is available
>> without spitting the dummy such that you can use swapoff without
>> fear? If not, would it be possible to build something of the kind in
>> future versions of Squid? I'm sure a lot of people would love to
>> know the answer to this...
> If it were implemented, the "cure" might be worse than the problem:
> if Squid needs more memory to service a request, is it better for it to
> increase its memory footprint (possibly slowing down) or to just stop
> taking requests altogether? You can limit the amount dedicated to
> in-memory caching, but most of the memory use will scale with the
> number of simultaneous connections and transactions, and with the size
> of the disk cache.
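[Another aside from me: if I read the docs right, the knob for "the amount
dedicated to in-memory caching" is the cache_mem directive in squid.conf, & the
disk cache size is set via cache_dir - a sketch with made-up values, not a
recommendation:]

```
# squid.conf sketch (illustrative values only):
cache_mem 16 MB                              # in-memory object cache - NOT a total cap
cache_dir ufs /var/spool/squid 8000 16 256   # 8 GB disk cache
# Index metadata for the disk cache, plus per-connection buffers, come
# on top of cache_mem - so total process size will be well above 16 MB.
```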
> Next word of reassurance: Your best bet is to leave swap enabled, and
> simply watch the total memory usage of Squid (and its DNS servers,
> don't forget!) with ps or top, and make sure it's not growing bigger
> than your available memory. Alternatively, watch swap use with "pstat
> -s" (or the Linux equivalent) and see that it either stays zero, or
> grows to a small amount and then grows no further. Our main server:
> % pstat -s
> Device name 1K-blocks Type
> sd0b 525308 Interleaved
> 0 (1K-blocks) allocated out of 525308 (1K-blocks) total, 0% in use
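[A quick way to eyeball that total from a script - a sketch assuming a
BSD-ish/Linux ps & that the processes are actually named squid & dnsserver
on your box:]

```shell
# Sum resident-set size (RSS, in KB) of squid & its dnsserver children.
# Process names are assumptions - check your own "ps -ax" output first.
ps -axo rss=,comm= | awk '
  /squid|dnsserver/ { total += $1 }
  END { printf "squid total RSS: %d KB\n", total }'
```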
> Just for a final point, this past week I got Squid running (mostly
> for laughs, but with some real use planned) on one of my home systems
> which is an old 486 running OpenBSD with 16MB of real RAM. Performance
> isn't great, but it works!
> % pstat -s
> Device 512-blocks Used Avail Capacity Type
> /dev/wd0b 132048 25616 106432 19% Interleaved
> % top
> load averages: 0.13, 0.10, 0.08 08:58:02
> 22 processes: 1 running, 21 idle
> CPU states: 0.3% user, 0.0% nice, 0.2% system, 0.5% interrupt, 99.1% idle
> Memory: Real: 5268K/10M act/tot Free: 384K Swap: 13M/64M used/tot
> -- Clifton
> --
> Clifton Royston -- LavaNet Systems Architect --
> "An absolute monarch would be absolutely wise and good.
> But no man is strong enough to have no interest.
> Therefore the best king would be Pure Chance.
> It is Pure Chance that rules the Universe;
> therefore, and only therefore, life is good." - AC
Received on Fri Oct 22 1999 - 18:12:04 MDT
