Re: chunked MemPool

From: Andres Kroonmaa <andre@dont-contact.us>
Date: Tue, 17 Apr 2001 10:59:39 +0200

On 12 Apr 2001, at 22:03, Henrik Nordstrom <hno@hem.passagen.se> wrote:

> Andres Kroonmaa wrote:
>
> > I'm wondering how portable memalign() is?
>
> Not very. It is not covered by "the Single Unix Specification Version
> 2".
>
> But regardless, quite efficient searching can be implemented using trees
> or hashes (trees are probably more suitable as we do not know the amount
> or distribution).

 I'm still not sure it is worth it. It would only pay off for pools
 that have very many chunks to search, and I have no idea where the
 threshold lies between adding overhead and becoming more efficient.
 But for a pool with fewer than some 4-8 chunks, searching via trees
 or hashes would surely only add overhead. And that is the common case.

 So, to be most efficient, we'd need to decide based on the number of
 chunks whether to use a tree or a plain linked-list search.
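
 Roughly what I have in mind (just a sketch; the struct and field names
 are made up here, not the actual MemPool internals, and a sorted array
 with binary search stands in for the tree):

#include <stddef.h>

typedef struct chunk {
    void *base;          /* start of this chunk's object area */
    size_t size;         /* bytes covered by this chunk */
    struct chunk *next;  /* plain singly-linked chunk list */
} chunk;

#define LINEAR_THRESHOLD 8   /* pure guess, would need measuring */

/* cheap list walk, fine while the pool has only a handful of chunks */
static chunk *
find_chunk_linear(chunk *head, const void *obj)
{
    chunk *c;
    for (c = head; c; c = c->next)
        if ((const char *) obj >= (const char *) c->base &&
            (const char *) obj < (const char *) c->base + c->size)
            return c;
    return NULL;
}

/* binary search over chunks kept sorted by base address; stands in
 * for the tree here */
static chunk *
find_chunk_sorted(chunk **sorted, int nchunks, const void *obj)
{
    int lo = 0, hi = nchunks - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        chunk *c = sorted[mid];
        if ((const char *) obj < (const char *) c->base)
            hi = mid - 1;
        else if ((const char *) obj >= (const char *) c->base + c->size)
            lo = mid + 1;
        else
            return c;
    }
    return NULL;
}

/* pick the strategy from the chunk count */
chunk *
find_chunk(chunk *head, chunk **sorted, int nchunks, const void *obj)
{
    if (nchunks <= LINEAR_THRESHOLD)
        return find_chunk_linear(head, obj);
    return find_chunk_sorted(sorted, nchunks, obj);
}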

 There are only a few pools that would benefit from trees - mainly the
 storeEntry, MD5, and LRU pools. And those hold generally very long-lived
 objects that are freed quite rarely, so the overhead of a plain search
 is quite small on average.

> I would suggest using a tree with about 5 branches per internal node.

 In Squid we only have splay trees? Is that what you meant? I don't
 feel like writing any tree code myself.

> Or you could use garbage collection to hide the overhead from normal
> operations. Simply put the free nodes in a bucket, and only look in
> detail if you need to actually reclaim memory.

 Basically a cache of frees. I was thinking along that path. Only we'd
 lose the per-chunk stats.
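
 Something like this is what I mean (again only a sketch; names like
 free_bucket and pool_reclaim are made up, and it reuses the chunk /
 find_chunk types from the sketch above). The per-chunk counters stay
 stale while pointers sit in the bucket, which is exactly the stats
 we'd lose:

#include <stddef.h>

typedef struct chunk chunk;   /* as in the sketch above */
chunk *find_chunk(chunk *head, chunk **sorted, int nchunks, const void *obj);

#define FREE_BUCKET_SIZE 256

typedef struct free_bucket {
    void *items[FREE_BUCKET_SIZE];
    int count;
} free_bucket;

/* fast path: just stash the pointer, no chunk search at all */
int
pool_free_deferred(free_bucket *b, void *obj)
{
    if (b->count < FREE_BUCKET_SIZE) {
        b->items[b->count++] = obj;
        return 1;
    }
    return 0;   /* bucket full, caller has to reclaim first */
}

/* slow path: run only when memory actually has to be reclaimed */
void
pool_reclaim(free_bucket *b, chunk *head, chunk **sorted, int nchunks)
{
    int i;
    for (i = 0; i < b->count; i++) {
        chunk *c = find_chunk(head, sorted, nchunks, b->items[i]);
        if (c) {
            /* here the object would go back onto its chunk's freelist
             * and the per-chunk counters would be updated */
        }
    }
    b->count = 0;
}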

------------------------------------
 Andres Kroonmaa <andre@online.ee>
 CTO, Delfi Online
 Tel: 6501 731, Fax: 6501 708
 Pärnu mnt. 158, Tallinn,
 11317 Estonia