Re: MemPools rewrite

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Mon, 30 Oct 2000 23:34:32 +0100

Andres Kroonmaa wrote:

> > Well, there are still race conditions where the memobject can
> > temporarily grow huge, so don't bet on it.
>
> what race conditions should I keep in mind?

The first one that pops up in my mind is when there are two clients for
one object which is as of yet marked as cachable, and one of the clients
has stalled or is a lot slower than the other. There have been a couple
of other bug-related ones in earlier Squid versions, and there are quite
likely more to come.

> I've put together a version of mempools with chunked allocations.
> With 2.5M objects and a chunk size of 8K, I ended up with very many chunks to handle.
> I thought I'd increase the chunk size for some specific pools, like StoreEntry,
> MD5 and heap_node, to 256K so that dlmalloc would place those onto the mmapped area.
> It does this, but quite selectively. For some reason it seems to try
> to avoid using mmap, even for large allocations.
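
(For reference, the chunked-allocation idea described above boils down to
something like the sketch below. This is not the actual MemPools code; the
struct and function names are made up, error handling is minimal, and chunks
are never released back once allocated.)

/* Sketch of a chunked pool: each chunk is one large malloc() carved into
 * fixed-size objects, and freed objects go onto a per-pool free list
 * rather than back to malloc(). */
#include <stdlib.h>

typedef struct pool {
    size_t obj_size;    /* size of one object, assumed >= sizeof(void *) */
    size_t per_chunk;   /* objects per chunk, e.g. 256K / obj_size */
    void *free_list;    /* freed objects, linked through their first word */
} pool_t;

static void pool_grow(pool_t *p)
{
    /* one big allocation; with a 256K chunk size this is what dlmalloc
     * may or may not hand out via mmap(), depending on its threshold */
    char *chunk = malloc(p->obj_size * p->per_chunk);
    if (!chunk)
        abort();
    for (size_t i = 0; i < p->per_chunk; i++) {
        void *obj = chunk + i * p->obj_size;
        *(void **)obj = p->free_list;
        p->free_list = obj;
    }
}

void *pool_alloc(pool_t *p)
{
    if (!p->free_list)
        pool_grow(p);
    void *obj = p->free_list;
    p->free_list = *(void **)obj;
    return obj;
}

void pool_free(pool_t *p, void *obj)
{
    *(void **)obj = p->free_list;
    p->free_list = obj;
}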

The default mmap threshold for Linux glibc is apparently 128 KB; see
glibc/malloc/malloc.c.
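
(Assuming glibc's malloc, that threshold can also be tuned at run time with
mallopt(); whether Squid should actually do this is a separate question, but
the knob looks like this:)

/* Sketch: adjust glibc's mmap threshold at startup so that e.g. 256K pool
 * chunks are, or are not, served from mmap(). Values here are examples. */
#include <malloc.h>

int main(void)
{
    /* force allocations of 64K and above to come from mmap() ... */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);
    /* ... or raise it so even 256K chunks stay in the data segment:
     * mallopt(M_MMAP_THRESHOLD, 512 * 1024);
     * M_MMAP_MAX similarly caps how many mmapped regions malloc keeps. */
    return 0;
}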

> Now I wonder if it might actually be a bad idea to have too many mmapped
> allocations for some reason. Any comments on that one?

Only that there is a higher overhead in wasted memory, due to page
alignment of the allocated size plus the malloc header, and that there is a
considerably higher cost in setting up/tearing down an mmap() than in
adjusting an internal pointer in the data segment.
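
(A quick way to see the page-alignment part of that overhead, assuming
glibc's malloc_usable_size() extension is available; the exact numbers
depend on the malloc implementation, page size and header size:)

/* Sketch: show how much a single large allocation is padded once it is
 * served from mmap(). Treat the output as illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    size_t request = 256 * 1024;            /* one 256K pool chunk */
    void *p = malloc(request);
    if (!p)
        return 1;
    size_t usable = malloc_usable_size(p);  /* glibc extension */
    printf("requested %zu, usable %zu, padding %zu bytes\n",
           request, usable, usable - request);
    free(p);
    return 0;
}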

/Henrik
Received on Mon Oct 30 2000 - 15:38:50 MST
