Re: Squid performance wish-list

From: Stewart Forster <slf@dont-contact.us>
Date: Tue, 25 Aug 1998 08:47:42 +1000

> File deletion is a steady background process, not exactly high-priority
> or highly latency dependent.

WRONG! Squid deletes as many files as it adds. Since you're also arguing
for very large files here, and Squid does throw in the odd large file, under
a linked-list layout we're talking about a disk access for every single block
that needs to be freed. Disk accesses are EXPENSIVE. Trivialising the
importance of file deletion is the first step to having a filesystem whose
performance sucks.

Look at it this way: if you had the option of a filesystem that could remove
all files under 512K in 1 disk access (ignoring bitmap updates) vs. one that
took up to 512K / 512 = 1024 disk accesses, I know which one I'd be picking.
Let's look at files under 16M, and Squid puts out a few of those too. The
direct/indirect system would deallocate those blocks in 2 disk accesses, vs.
16M / 512 = 32768 disk accesses. Going further, the direct/indirect system
requires 1 more disk access per 16M of file size; the linked list system
requires 32768 disk accesses per extra 16M. Are these numbers big enough
for you to see why linked lists of blocks are bad?
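To make the arithmetic concrete, here's a rough cost model in C. This is
illustrative only, not Squid code; the 512-byte block size, 512K of direct
coverage, and one indirect block per further 16M are assumptions taken from
the figures above:

    /* Rough cost model for freeing an N-byte file, using the figures
     * above.  Assumptions (mine, not Squid's): 512-byte blocks, direct
     * pointers covering the first 512K, one indirect block per 16M. */
    #include <stdio.h>

    #define BLOCK_SIZE    512L
    #define DIRECT_SPAN   (512L * 1024)       /* bytes under direct ptrs */
    #define INDIRECT_SPAN (16L * 1024 * 1024) /* bytes per indirect block */

    /* Direct/indirect scheme: one access for the inode (which holds all
     * the direct pointers), plus one access per indirect block. */
    static long unlink_cost_indexed(long bytes)
    {
        long cost = 1;
        if (bytes > DIRECT_SPAN)
            cost += (bytes - DIRECT_SPAN + INDIRECT_SPAN - 1) / INDIRECT_SPAN;
        return cost;
    }

    /* Linked-list scheme: block n holds the pointer to block n+1, so
     * every single block must be read just to find its successor. */
    static long unlink_cost_linked(long bytes)
    {
        return (bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    }

    int main(void)
    {
        long sizes[] = { 512L * 1024, 16L * 1024 * 1024, 32L * 1024 * 1024 };
        int i;
        for (i = 0; i < 3; i++)
            printf("%6ldK file: indexed = %ld accesses, linked = %ld accesses\n",
                   sizes[i] / 1024, unlink_cost_indexed(sizes[i]),
                   unlink_cost_linked(sizes[i]));
        return 0;
    }

For a 512K, 16M and 32M file that prints 1 vs. 1024, 2 vs. 32768, and
3 vs. 65536 accesses respectively.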

Look, what you're proposing is a filesystem which is basically the same as
the one I've proposed, except that you've split the block pointers out across
the whole file (see the sketch below). Kevin and I have explained why this is
bad. Neither scheme is any more complicated to code up than the other.
Listen to reason.
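
For the record, a sketch of the structural difference in question. These are
illustrative declarations of my own, not from either proposal; the sizes
follow the figures above:

    /* Indexed layout: all block pointers live together in the file's
     * metadata, so they can be fetched with O(1) accesses per 16M. */
    struct inode_indexed {
        unsigned direct[1024];    /* 1024 x 512-byte blocks = first 512K */
        unsigned indirect;        /* block of pointers for the next 16M */
        unsigned double_indirect; /* and so on for bigger files */
    };

    /* Linked layout: the pointer to block n+1 lives inside block n
     * itself, scattering the metadata across the whole file.  Freeing
     * the file means reading every block along the chain. */
    struct block_linked {
        unsigned next;                     /* next block in the file */
        char data[512 - sizeof(unsigned)]; /* payload */
    };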

        Stew.

-- 
Stewart Forster (Snr. Development Engineer)
connect.com.au pty ltd, Level 9, 114 Albert Rd, Sth Melbourne, VIC 3205, Aust.
Email: slf@connect.com.au   Phone: +61 3 9251-3684   Fax: +61 3 9251-3666