Re: [squid-users] Ramdisks

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Thu, 22 Nov 2001 14:10:11 +0100

I think you may have misunderstood me. I am trying to show that the technique
is valid and outperforms most of the alternatives available for a cache-type
workload.

On Thursday 22 November 2001 12.19, Andres Kroonmaa wrote:

> I think you are too conservative here. Disk size and sequential speed are
> well related and both growing. The technique you call a hack is actually
> very well suited to log-FS (basically copying any read-accessed data to
> the write buffers and eventually writing them out again together with
> other write data, at FIFO tail, reaching pure LRU), and has almost zero
> performance impact. Disk IO performance is limited by seek/rotational
> latency mostly, and copy of cache-hits only increases write sizes about
> 20-30% (byte hit ratio). With log-FS disk-io performance is limited by
> random reads, not writes. And this difference is pretty large and growing.
> So this is imo a very decent approach and not a hack at all.

Exactly. Just because I call something a hack does not mean it is ugly
or negative. Hacks can be beautiful and perfect ;-) (I would not call myself
a Hacker otherwise)

The reason I call it a hack is that to most people it is unintuitive
that you can do more I/O to reduce the total amount of I/O. More so when
discussing writes; most people can easily understand how read-prefetch
works and why it provides a benefit.
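
To illustrate, here is a minimal sketch of the copy-on-hit idea in a
log-structured store (not the actual Squid or COSS code; all names are
made up for the example):

#include <stddef.h>
#include <string.h>

/* Illustration only: objects live in a circular log, the write pointer
 * advances sequentially and space is reclaimed FIFO at the tail.
 * Re-appending an object when it is hit moves it away from the tail,
 * so the FIFO reclaim behaves like LRU at the cost of some extra
 * sequential write bandwidth. */

struct log_store {
    unsigned char *area;    /* the circular log */
    size_t size;            /* total log size */
    size_t head;            /* next write offset */
};

struct object {
    size_t offset;          /* where the object currently lives */
    size_t length;
};

/* append data sequentially at the head of the log
 * (wrap-around and tail reclaim left out for brevity) */
static size_t log_append(struct log_store *s, const void *data, size_t len)
{
    size_t at = s->head;
    memcpy(s->area + at, data, len);
    s->head += len;
    return at;
}

/* A cache hit reads the object and copies it back into the write
 * stream. The extra write is purely sequential, so it costs roughly
 * the byte hit ratio (20-30%) in bandwidth but no extra seeks. */
static void read_hit(struct log_store *s, struct object *obj, void *buf)
{
    memcpy(buf, s->area + obj->offset, obj->length);  /* the read */
    obj->offset = log_append(s, buf, obj->length);    /* the re-append */
}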

> Major idea of FIFO-like FS-es is to make sure writes are sequential to
> disks always. Reverting to random writes to fill expired spots defeats
> that performance benefit. Trying to be smarter about disk allocation
> and keeping write chunks large forces us to make compromises between
> fragmentation, removal policy hitrate and performance penalty.
> At a cost of disk space we could do a lot, but I don't believe that
> increasing disk sizes is the cure. It has never been the case that
> consumer disks are left unfilled, and I don't think this will change
> any time soon ;)

My point exactly.

This will only change when solid state outperforms magnetic media and gets
rid of the mechanics, which is not likely anytime soon. Electronics keep
getting better and better, but mechanics are much more constrained.
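
To put rough numbers on the random vs sequential gap (all figures below
are assumptions, only meant to show the order of magnitude):

#include <stdio.h>

/* Back-of-envelope comparison of random vs sequential disk throughput.
 * The numbers are assumed ballpark figures, not measurements. */
int main(void)
{
    double seek_ms   = 8.0;   /* assumed average seek time */
    double rotate_ms = 4.2;   /* assumed half rotation at 7200 rpm */
    double seq_mb_s  = 30.0;  /* assumed sequential transfer rate */
    double obj_kb    = 13.0;  /* assumed mean object size */

    double random_ops = 1000.0 / (seek_ms + rotate_ms);
    double seq_ops    = seq_mb_s * 1024.0 / obj_kb;

    printf("random reads:      ~%.0f objects/s\n", random_ops);
    printf("sequential writes: ~%.0f objects/s\n", seq_ops);

    /* Sequential comes out more than an order of magnitude ahead, which
     * is why spending 20-30% extra write bandwidth on copy-on-hit is
     * cheap compared to turning the writes into random I/O. */
    return 0;
}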

> I would probably resist that idea. 80% of requests are small. By forcing
> COSS to deal with large objects we clutter it with needless complexity
> that eventually hits performance, probably also at the cost of higher
> ram usage. For large objects it is better to use an FS that is best
> suited to them, or at least adds the least cpu overhead. I'd rather
> welcome another COSS-like fs implementation optimised for large files
> if there is a need for such an approach. The highest req/rate path
> should be very optimised.

True. However, I don't think the complexity increase needs to be that large,
or to have any impact on non-large objects. For the cyclic filesystem design
I see no major problem dealing with large objects, except for the pollution
from aborted transfers, and by implementing partial object storage such
pollution is minimized.
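
Roughly what I have in mind for partial object storage, continuing the
log_append() sketch from earlier in this mail (again just an
illustration with made-up names):

#define MAX_CHUNKS 1024

struct chunk_ref {
    size_t offset;              /* location of this chunk in the log */
    size_t length;
};

struct large_object {
    struct chunk_ref chunks[MAX_CHUNKS];
    int nchunks;
    int complete;               /* 0 while the transfer is in progress */
};

/* Called for each piece of the body as it arrives from the network.
 * An aborted transfer leaves only the chunks written so far in the
 * log, and the stored prefix can still be reused. */
static void store_chunk(struct log_store *s, struct large_object *o,
                        const void *data, size_t len)
{
    if (o->nchunks >= MAX_CHUNKS)
        return;                 /* sketch only; stop indexing */
    o->chunks[o->nchunks].offset = log_append(s, data, len);
    o->chunks[o->nchunks].length = len;
    o->nchunks++;
}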

> I feel it is very common that people try to design a universal FS that is
> optimal for all cases. IMO we shouldn't even attempt that.

I think it is at least worth considering the ideas, and understanding their
impact, before rejecting them.

> Definitely, definitely. The logical removal policy for small objects is
> LRU or LFU, optimising object hit-ratio; for large objects you would
> prefer to optimise byte hit-ratio. Imagine a tape-drive robot and some
> huuuge files ;)

Exactly. The problem with large/small objects is mainly a policy question,
not so much a technical one at the filesystem layer.
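
A small made-up example of why the two metrics pull in different
directions (all numbers are assumptions):

#include <stdio.h>

/* 1000 small requests of ~13 KB each, 400 of them hits, plus one 50 MB
 * request. Whether that single large request is a hit barely moves the
 * object hit ratio but dominates the byte hit ratio, which is why a
 * store for large objects should optimise for bytes saved. */
int main(void)
{
    double small_reqs = 1000, small_hits = 400;
    double small_size = 13.0 * 1024;
    double big_size   = 50.0 * 1024 * 1024;

    double total_reqs  = small_reqs + 1;
    double total_bytes = small_reqs * small_size + big_size;

    printf("object hit ratio, big object MISS: %.1f%%\n",
           100 * small_hits / total_reqs);
    printf("object hit ratio, big object HIT:  %.1f%%\n",
           100 * (small_hits + 1) / total_reqs);
    printf("byte hit ratio, big object MISS:   %.1f%%\n",
           100 * (small_hits * small_size) / total_bytes);
    printf("byte hit ratio, big object HIT:    %.1f%%\n",
           100 * (small_hits * small_size + big_size) / total_bytes);
    return 0;
}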

Regards
Henrik