Re: [squid-users] "Quadruple" memory usage with squid

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sun, 13 Dec 2009 00:28:22 +1300

Kinkie wrote:
>> Though there is a potential loss of disk IO efficiency when the data
>> portion is not aligned on the 4096/8192-byte disk chunk borders. The few
>> dozen bytes of overhead difference between the disk and memory nodes are
>> annoying.
>
> A possibility is to detach the data from the data structures needed to
> access them, in much the same way filesystems do.
> But this is better tested out first in a specialized environment (e.g.
> in a mem-based cache_dir implementation).
>

That's pretty much what I was thinking: taking on the inode model, where N
mem_node_meta objects are allocated in a sequential page slab, which would
be allocated and swapped in/out as a chunk anyway. That maintains locality
of the metadata, which, same as now, contains a pointer to its page of data.
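
A rough sketch of that layout (mem_node_meta and MetaSlab are hypothetical
names for this sketch; StoreIOBuffer and SM_PAGE_SIZE are the existing ones):

    /* Hypothetical: inode-style metadata with the data page detached and
     * allocated separately, instead of embedded in the node itself. */
    class mem_node_meta
    {
    public:
        StoreIOBuffer nodeBuffer; /* offset/length as now, but the data
                                   * ptr aimed at a detached 4KB/8KB page */
        bool write_pending;
    };

    /* Hypothetical: a page-sized slab of N sequential meta entries,
     * allocated and swapped in/out as one chunk, so an object's
     * metadata stays local. */
    class MetaSlab
    {
    public:
        mem_node_meta nodes[SM_PAGE_SIZE / sizeof(mem_node_meta)];
    };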

This needs a bit more thought, and figuring out why mem_node has a fixed
data buffer of its own AND a StoreIOBuffer pointing at a dynamic amount of
MemBuf. I have not looked at this beyond seeing the fields of mem_node and
having the idea that it holds the in-memory object in a sequence of data
pages.
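
For reference, as best I recall the fields in question (src/mem_node.h)
are roughly:

    class mem_node
    {
    public:
        ...
        StoreIOBuffer nodeBuffer; /* data ptr points at data[] below;
                                   * offset/length track the used region */
        char data[SM_PAGE_SIZE];  /* fixed 4KB buffer inside every node */
        bool write_pending;
    };

which, if I'm reading it right, would make the StoreIOBuffer just a view
onto the embedded buffer rather than a separate MemBuf allocation. Worth
verifying against the source.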

Rough ideas on top of that:

  Going by the sizes of mem_node and StoreIOBuffer, it looks like we can
squeeze each meta entry into 256 bytes, or with padding easily fit 2-3 of
them per KB.

I'm now thinking of an average object of <16KB taking 3x mem_node_meta and
3x dynamically allocated data pages: 2x 4KB plus a third, aligned 8KB page.

So a minimum object takes 1x 4KB page, as now; an average object takes the
equivalent of 4x 4KB data pages plus 256B-1KB of meta. Larger objects get
another mem_node_meta slab, and the sequence repeats.
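
Putting rough numbers on that (my arithmetic, assuming 256B per meta entry):

    minimum object (<=4KB):  1x 4KB data page                    = 4KB, as now
    average object (<16KB):  2x 4KB + 1x 8KB data pages          = 16KB
                             + 3x 256B of meta in a shared slab  = 768B
    larger object:           another meta slab plus further data pages,
                             repeating the pattern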

Amos

-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
   Current Beta Squid 3.1.0.15