Re: [squid-users] Re: Squid 1.2 formats and other Q's.

From: Chris Tilbury <>
Date: Tue, 15 Sep 1998 09:14:58 +0100

On Tue, Sep 15, 1998 at 08:24:49AM +0200, Henrik Nordstrom wrote:

> Chris Tilbury wrote:
> > This is, of course, something that isn't addressed by either a
> > transaction log or a stripe. You'd need to make a logged, mirrored,
> > stripe. This is going to be quite expensive in terms of hardware.
> I don't fully agree here. Cache filesystems are different from normal
> filesystems. If one disk fails, losing the contents of that disk does
> not matter much, as long as the contents of the other disks are not
> affected. Especially if the gain from not using mirroring is double
> the number of available disk spindles (and space) for the same
> price.

I don't think we're actually disagreeing here. There is only one way to get
resilience into a RAID0 striped filesystem, and that is to mirror it (RAID
10, 1+0, call it what you will). If you lose one component of a stripe, you
lose the filesystem. There's no way around this, it's a fact :-).

You're quite right, though, that if you use each disk as a separate entity,
then losing one still preserves the remainder of the cache. There would
certainly be no point in mirroring individual disks in that scenario.
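To make the difference concrete, here is a small illustrative sketch (my own
toy model, not Squid or DiskSuite code): under RAID0 striping, every object
larger than one stripe unit has blocks on every disk, so a single disk
failure loses the whole cache; with one independent filesystem per disk,
each object lives wholly on the disk it hashes to, so a failure loses only
that disk's share.

```python
# Toy model (assumed layout, not Squid's): compare what a single disk
# failure costs under RAID0 striping versus independent cache disks.

N_DISKS = 4
N_OBJECTS = 10_000
BLOCKS_PER_OBJECT = 8

def raid0_survivors(failed_disk):
    """Under striping, an object's blocks are interleaved across all
    disks, so any multi-block object touches every disk."""
    survivors = 0
    for obj in range(N_OBJECTS):
        disks_touched = {(obj * BLOCKS_PER_OBJECT + b) % N_DISKS
                         for b in range(BLOCKS_PER_OBJECT)}
        if failed_disk not in disks_touched:
            survivors += 1
    return survivors

def independent_survivors(failed_disk):
    """With one filesystem per disk, each object lives wholly on the
    disk it hashes to; only that disk's share is lost."""
    return sum(1 for obj in range(N_OBJECTS)
               if obj % N_DISKS != failed_disk)

print(raid0_survivors(0))        # 0 -- every object spans the dead disk
print(independent_survivors(0))  # 7500 -- (N-1)/N of the cache survives
```

The independent-disk layout turns a total loss into a roughly 1/N loss,
which for a cache is merely a hit-rate dip rather than an outage.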

You still don't address write performance doing this, though. You'll still
have synchronous writes taking place (metadata updates, etc), which will
cause a performance degradation. Using a transaction log (or NVRAM if you
are very rich) _will_ help this. Sure, you can often turn many of these off
by disabling synchronous writes of metadata, but that can have adverse side
effects too.
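The benefit of a transaction log can be sketched as follows (my own
illustration, not Squid's I/O code or DiskSuite's on-disk format): a
fully synchronous store pays one fsync per object written, while a
logged store appends updates sequentially and makes a whole batch
durable with a single sync.

```python
import os
import tempfile

def store_synchronous(directory, objects):
    """One fsync per object: every update reaches stable storage before
    the write is acknowledged (safe, but seek- and sync-bound)."""
    for name, data in objects:
        path = os.path.join(directory, name)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)          # forced to the platter, object by object
        finally:
            os.close(fd)

def store_logged(directory, objects):
    """Transaction-log style: append all updates sequentially, then one
    fsync makes the whole batch durable in a single pass."""
    log = os.path.join(directory, "journal")
    fd = os.open(log, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for name, data in objects:
            record = name.encode() + b"\0" + data + b"\n"
            os.write(fd, record)
        os.fsync(fd)              # one sync amortised over the batch
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    objs = [(f"obj{i}", b"x" * 64) for i in range(100)]
    store_synchronous(d, objs)   # 100 fsyncs
    store_logged(d, objs)        # 1 fsync
    print(len(os.listdir(d)))    # 101 files: 100 objects + the journal
```

The logged variant trades scattered synchronous metadata writes for one
sequential append stream, which is exactly where the performance comes
back without giving up durability.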

(There's nothing to stop you using a transaction log on a filesystem on a
 single disk, by the way, at least not with DiskSuite. It could be a shared
 log, split between many different filesystems on different disks, too.)

> The same applies if a machine crashes & reboots. It does not matter
> if you lose some of the cached files.

True, but it does matter if the machine can't properly restart because it
cannot clean the cache filesystem(s). This is the risk you increase vastly
if you disable synchronous metadata updates to get the performance gain.
It could mean the difference between the cache coming back to life of its
own accord, or sitting at a single user login prompt waiting for someone to
come and fsck it. I know which I'd rather happened.

> Right now there is work going on to optimise Squid for very high
> loads. Keep your eyes open for changes in 1.2.
> Areas being looked into:
> * How to handle a (too) fast network
> * How to balance the load on many disks
> * The I/O operation pattern used by Squid.
> And there is also work being done on building a custom filesystem model
> suitable for caching, to get rid of much of the overhead associated with
> a standard filesystem. Unfortunately only performance is being looked
> at, not so much crash recovery.

I think the latter certainly needs to have at least token attention paid to
it. Whilst issues related to crash recovery might not be so important on
simple cache "appliance" type systems, on a cache running on a general
purpose operating system with many more "moving parts", it's of far more
importance.

Perhaps some of the work done with INN2 on CNFS could be of value here, as
news seems at least superficially similar in its use of filesystems for
object storage (large numbers of directories containing clusters of
generally small files).
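For readers unfamiliar with it, CNFS stores articles in large preallocated
cyclic buffers, overwriting the oldest data as the write pointer wraps, so
there is little per-file metadata to keep consistent after a crash. A toy
in-memory sketch of the idea (my own illustration, not INN's actual on-disk
format):

```python
class CyclicStore:
    """Toy CNFS-style store: one fixed-size buffer, a wrapping write
    pointer, and no per-object filesystem metadata. Old objects are
    silently overwritten when the pointer wraps -- acceptable for a
    cache, where eviction doubles as free crash recovery."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.write_pos = 0
        self.index = {}  # key -> (offset, length)

    def put(self, key, data):
        if len(data) > self.capacity:
            raise ValueError("object larger than the buffer")
        if self.write_pos + len(data) > self.capacity:
            self.write_pos = 0                 # wrap: oldest data reclaimed
        start = self.write_pos
        self.buf[start:start + len(data)] = data
        self.write_pos += len(data)
        # drop index entries whose bytes were just overwritten
        self.index = {k: (o, l) for k, (o, l) in self.index.items()
                      if o + l <= start or o >= self.write_pos}
        self.index[key] = (start, len(data))

    def get(self, key):
        if key not in self.index:
            return None
        offset, length = self.index[key]
        return bytes(self.buf[offset:offset + length])

store = CyclicStore(16)
store.put("a", b"12345678")
store.put("b", b"12345678")
store.put("c", b"12345678")   # wraps and overwrites "a"
print(store.get("a"))         # None
print(store.get("c"))         # b'12345678'
```

The appeal for a web cache is the same as for news: a crash leaves at worst
a few partially written objects to discard, rather than a filesystem that
needs a full fsck before the machine can come back up.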



Chris Tilbury, UNIX Systems Administrator, IT Services, University of Warwick
EMAIL: PHONE: +44 1203 523365(V)/+44 1203 523267(F)
Received on Tue Sep 15 1998 - 01:17:19 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:42:01 MST