Re: Can I *not* have an on-disk cache?

From: Clifton Royston <cliftonr@dont-contact.us>
Date: Tue, 13 Jul 1999 08:17:39 -1000 (HST)

Steve Willer writes:
> On Tue, 13 Jul 1999, Scott Hess wrote:
>
> > At worst, put the cache on a ramdisk...
>
> Well, it's an interesting idea, but currently the kernel is my bottleneck.
> Not the disk. Putting the files in ramdisk still involves system calls,
> path parsing, etc. It would probably be a bit better, but I was really
> hoping for a way to avoid system calls entirely in this case.

I'd almost be prepared to bet you're wrong on this. Synchronizing
blocks to the disk, the disk writes that go with it, and the
calculations involved are very likely dominating over the system
calls. Remember also that when you use a UNIX RAM disk (mfs) you're
going through a completely different set of low-level filesystem code,
which *knows* it doesn't need to deal with all that junk.

> Small rant: I've been a bit frustrated over the apparent inflexibility
> in some portions of Squid. Why is it that we _must_ have an access_log,
> for example? I could write to /dev/null, but Squid is still going to build
> the log line in its buffer and make the kernel calls to output to the log.
> Also, why is it necessary that I have an on-disk cache? Surely there are
> others who are caching very small amounts of data but for whom performance
> is critical...what about us?

It is, after all, as the authors have pointed out, a free product
designed primarily for research, even if lots of people are using it to
do serious work. If you really want to get into it, you could always
#ifdef the access log code in the source, or add a special-case check
for "none" as done with store.log.

Or optimize the algorithms for main-RAM storage - I admit I'm still
shaken by the revelation that the code governing what Squid keeps
cached in main RAM is, by the authors' own admission, sub-optimal. I
think that explains a lot of the performance bottlenecks there.

> That's true, although truthfully that machine is swimming in RAM. It has
> 512MB RAM, and it's basically just running Squid. I have Squid configured
> for something small (either 256MB or 128MB, can't remember), and the rest
> is just going to the kernel.

From the recent discussions, it sounds like you would do best to
convert most of that into a RAM disk large enough to hold the cacheable
data, as Squid will not currently make effective use of large pools of
main RAM.
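
For example, something along these lines in squid.conf (just a sketch -
the exact cache_dir arguments vary between Squid versions, and
/ramdisk/squid is only a stand-in for wherever the RAM disk is
mounted):

    # Keep Squid's own in-memory object pool modest, since it doesn't
    # manage a large cache_mem well, and put the disk cache on the
    # RAM disk mount instead.
    cache_mem 32 MB
    cache_dir /ramdisk/squid 384 16 256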

If you wanted to get fancy, you could even add some scripts to copy the
RAM disk off to a disk partition at Squid shutdown, and back at boot
time.
  -- Clifton

-- 
 Clifton Royston  --  LavaNet Systems Architect --  cliftonr@lava.net
        "An absolute monarch would be absolutely wise and good.  
           But no man is strong enough to have no interest.  
             Therefore the best king would be Pure Chance.  
              It is Pure Chance that rules the Universe; 
          therefore, and only therefore, life is good." - AC