Re: Do not make a compatible squid file system

From: <adrian@dont-contact.us>
Date: Mon, 7 Feb 2000 01:17:32 +0800

On Sat, Feb 05, 2000, Henrik Nordstrom wrote:

> > - typical write case is 1 seek/write per 100 objects (assuming membuffer
> > size of 1MB and 10k average object size).
>
> This worries me a little. There is an issue of starvation for readers
> there. You probably want to trickle out the write data in chunks sized to
> match your read latency requirements. This also applies to large hits,
> where you do not want one large hit to have too large an impact on the
> latency of other concurrent hits. Sometimes a few extra seeks help to
> improve the overall performance.

Possibly. One of the things I want to do after I finish the per-swapdir
heap replacement/list code is to integrate COSS into the tree and then
play with writing an async COSS around the guidelines on your homepage.
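
Roughly the shape I'm picturing for the chunked write-out, as a sketch
only (the names are invented, this isn't real COSS code): drain a full
membuffer from the event loop one bounded chunk at a time, so queued
reads get serviced between chunks instead of stalling behind one big
1MB write.

/* Sketch only -- invented names, not real COSS code. */
#include <sys/types.h>
#include <unistd.h>

#define COSS_MEMBUF_SZ   (1024 * 1024)  /* membuffer size from above */
#define COSS_WRITE_CHUNK (64 * 1024)    /* tune to the read latency target */

struct coss_membuf {
    char buf[COSS_MEMBUF_SZ];
    size_t len;         /* bytes of objects packed into the buffer */
    size_t written;     /* bytes flushed to disk so far */
    off_t diskstart;    /* offset of this membuf in the cyclic store */
    int fd;
};

/* Write one chunk and return to the event loop; returns 1 if more
 * chunks remain, 0 when the membuf is fully flushed, -1 on error. */
static int
coss_membuf_flush_chunk(struct coss_membuf *mb)
{
    size_t left = mb->len - mb->written;
    size_t n = left < COSS_WRITE_CHUNK ? left : COSS_WRITE_CHUNK;
    ssize_t w;

    if (n == 0)
        return 0;
    w = pwrite(mb->fd, mb->buf + mb->written, n,
               mb->diskstart + (off_t) mb->written);
    if (w < 0)
        return -1;
    mb->written += (size_t) w;
    return mb->written < mb->len ? 1 : 0;
}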

I want to profile the advantage of having multiple (lots of) spindles
running COSS under real live traffic flows, and see what performance
I get. My stab in the dark here is that if you take into account
browser behaviour (request the html object, then the associated
gifs/jpgs/etc on the same page), you get a sort of cheap man's
locality as a function of request throughput, number of simultaneous
clients, and the size of the reads/writes that you do.
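
To put rough numbers on that: with 10k average objects, a page of one
html file plus (say) ten inline images is about 110k, i.e. a tenth of
a 1MB membuffer. If those requests arrive close together they get
packed next to each other in the same membuffer, so a later
whole-page hit costs one or two seeks instead of eleven. (The
ten-image page is just an illustrative guess, obviously.)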

> For a large Squid a few MB of disk buffers won't matter much, especially
> not if the OS can be told to reduce its buffering of the cache I/O.

Hrm. I still want to write a patch that tries to keep the beginnings of
large objects in memory... now that I have a few weeks of copious
spare time I should start cranking out patches to my modio
sourceforge branch.
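
The rough shape of that patch, as I'm imagining it (sketch only,
made-up names and thresholds rather than anything in the store code):
when a large object is swapped out, keep a pinned copy of its first
few KB so the next hit can start answering from memory while the disk
read for the tail is still sitting in the queue.

/* Sketch only -- invented names, not the real store internals. */
#include <stdlib.h>
#include <string.h>

#define LARGE_OBJ_SZ  (256 * 1024)  /* made-up "large" threshold */
#define HEAD_KEEP_SZ  (8 * 1024)    /* how much of the head to keep */

struct obj_head {
    char data[HEAD_KEEP_SZ];
    size_t len;
};

/* Returns a pinned copy of the object's head, or NULL for small
 * objects (which can live whole in cache_mem anyway). */
static struct obj_head *
pin_object_head(const char *obj, size_t objlen)
{
    struct obj_head *h;

    if (objlen < LARGE_OBJ_SZ)
        return NULL;
    h = malloc(sizeof(*h));
    if (!h)
        return NULL;
    h->len = objlen < HEAD_KEEP_SZ ? objlen : HEAD_KEEP_SZ;
    memcpy(h->data, obj, h->len);
    return h;
}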

Adrian
Received on Sun Feb 06 2000 - 10:19:04 MST
