Re: Do not make a compatible squid file system

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Sat, 05 Feb 2000 22:37:08 +0100

Eric Stern wrote:

> I've combated this in COSS two ways. First of all, writes are combined.
> COSS keeps large memory buffers (~1MB). As new objects are added to the
> store (or hits moved to the front), they are added to a membuffer. When
> the membuffer overflows, it is written to disk and a new membuffer
> created. Thus, we can conclude that every 100 hits will only result in 1
> extra disk write, which isn't too bad. (This is already implemented). We can
> combat this further by deciding that a hit object will only be moved to
> the front of the store if it is within the "top" 50% of the store. We
> assume that if it is within the top 50%, it's not in immediate danger of
> being overwritten and we can safely leave it there for a while. (This part
> hasn't been done yet.)

Sounds very much like you have read my discussion on Squid-dev some
(several) months back; an archive of that discussion thread is available
from my Squid pages.
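
For reference, a minimal sketch of the write-combining idea described
above (the struct, the names, the sizes and the pwrite() flush are my own
illustration of the technique, not the actual COSS code):

#include <string.h>
#include <unistd.h>
#include <sys/types.h>

#define MEMBUF_SIZE (1024 * 1024)   /* ~1MB write buffer */

struct membuf {
    char buf[MEMBUF_SIZE];
    size_t used;        /* bytes filled so far */
    off_t disk_offset;  /* where this buffer's contents will land on disk */
};

/* Append one object to the current buffer; when the buffer is full,
 * write it out in one sequential operation and start over.  Objects
 * larger than the buffer would need special handling. */
static off_t
membuf_add(struct membuf *mb, int fd, const void *obj, size_t len)
{
    off_t where;

    if (mb->used + len > MEMBUF_SIZE) {
        /* One large write covers everything added since the last flush
         * (~100 objects at a 10k average object size). */
        pwrite(fd, mb->buf, mb->used, mb->disk_offset);
        mb->disk_offset += mb->used;
        mb->used = 0;
    }
    where = mb->disk_offset + mb->used;
    memcpy(mb->buf + mb->used, obj, len);
    mb->used += len;
    return where;   /* the object's new location in the store */
}

The point being that an object's store location is decided when it is
copied into the buffer, and the actual disk I/O is one sequential write
per ~1MB of new data.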

> Honestly, I can't imagine anything that could be more efficient than COSS.

That is a brave statement, especially considering that different people
put different meanings into the word "efficient". We do, however, seem to
be thinking along the same lines here. It looks like you have in fact
implemented what I have been planning to do, or at least something very
close to it. ;-)

> - typical write case is 1 seek/write per 100 objects (assuming membuffer
> size of 1MB and 10k average object size).

This worries me a little. There is an issue of reader starvation there.
You probably want to trickle out the write data in chunks sized to match
your read latency requirements. This also applies to large hits, where
you do not want one large hit to have too large an impact on the latency
of other concurrent hits. Sometimes a few more seeks help to improve the
overall performance.

> UFS:
> - number of seeks: 25000
> - number of reads: 12000
> - number of writes: 13000
>
> COSS:
> - number of seeks: 4600
> - number of reads: 4000
> - number of writes: 600

Looks like what I would expect, and by using a good hot object cache you
should be able to reduce the reads further (if you can afford the memory
required).
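
For illustration, a hot object cache can be as simple as a small LRU
table sitting in front of the disk reads; this is only a toy sketch of
the idea, not Squid's actual memory cache:

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

#define HOT_SLOTS 256   /* how many objects to keep in memory */

struct hot_entry {
    off_t key;              /* object's disk offset, used as its identity */
    void *data;
    size_t len;
    unsigned long last_use;
};

static struct hot_entry hot[HOT_SLOTS];
static unsigned long use_clock;

/* Return cached data for `key`, or NULL if we have to go to disk. */
static void *
hot_lookup(off_t key, size_t *len)
{
    int i;

    for (i = 0; i < HOT_SLOTS; i++) {
        if (hot[i].data != NULL && hot[i].key == key) {
            hot[i].last_use = ++use_clock;
            *len = hot[i].len;
            return hot[i].data;
        }
    }
    return NULL;
}

/* After a disk read, remember the object, evicting the least recently
 * used slot. */
static void
hot_insert(off_t key, const void *data, size_t len)
{
    int i, victim = 0;

    for (i = 1; i < HOT_SLOTS; i++)
        if (hot[i].last_use < hot[victim].last_use)
            victim = i;
    free(hot[victim].data);
    hot[victim].data = malloc(len);
    if (hot[victim].data == NULL)
        return;
    memcpy(hot[victim].data, data, len);
    hot[victim].key = key;
    hot[victim].len = len;
    hot[victim].last_use = ++use_clock;
}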

> Yes, COSS does use more memory. It's a tradeoff, and I am more interested
> in obtaining performance than saving memory. It's not that bad anyway; in
> some informal tests with Polygraph to compare UFS vs COSS, the Squid
> process hovered near 15MB with UFS, vs 22MB with COSS.

You didn't have much cache, did you? Or have you changed your version of
Squid not to use an in-memory index of the cache content?

For a large Squid, a few MB of disk buffers won't matter much, especially
not if the OS can be told to reduce its buffering of the cache I/O.
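
On systems that support it, something like posix_fadvise() can be used
for that kind of hint; this call postdates this discussion and is only
an example of the sort of OS hint meant here, not anything Squid does:

#include <fcntl.h>
#include <sys/types.h>

/* Tell the kernel we keep our own copy (the membuffers / hot object
 * cache), so its page cache copy of this range can be discarded. */
static void
drop_os_buffering(int fd, off_t offset, off_t len)
{
    posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
}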

--
Henrik Nordstrom
http://squid.sourceforge.net/hno/
Received on Sat Feb 05 2000 - 15:12:50 MST
