Re: MemPools rewrite

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Thu, 19 Oct 2000 21:45:20 +0200

Andres Kroonmaa wrote:

> If we need to preallocate a bunch every time to grow the pool, then why
> do that one object at a time, instead of allocating a chunk and splitting
> it internally? Why incur the 16-byte libmalloc overhead for each int-sized
> item that is used in the thousands? Squid has amazingly many allocations
> of 4 or 8 bytes (even 1 or 2).

Because we might want to be able to maintain a high-water mark on the
amount of idle memory. Doing that with chunked allocations is not trivial.
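
To illustrate what I mean (only a rough sketch with made-up names, not the
actual memPools code): with per-object allocations any single idle object
can be handed straight back to libmalloc to stay under the mark, while a
chunk could only be returned once every object carved from it is idle.

#include <stdlib.h>

typedef struct idle_obj {
    struct idle_obj *next;
} idle_obj;

typedef struct {
    size_t obj_size;        /* size of each object, >= sizeof(idle_obj) */
    size_t idle_count;      /* objects currently sitting idle */
    size_t idle_highwater;  /* max idle objects we are willing to keep */
    idle_obj *idle_list;    /* list of idle objects */
} simple_pool;

void *pool_alloc(simple_pool *p)
{
    idle_obj *o = p->idle_list;

    if (o) {
        p->idle_list = o->next;
        p->idle_count--;
        return o;
    }
    return malloc(p->obj_size);     /* one libmalloc call per object */
}

void pool_free(simple_pool *p, void *ptr)
{
    idle_obj *o = ptr;

    if (p->idle_count >= p->idle_highwater) {
        free(ptr);                  /* over the mark: give it straight back */
        return;
    }
    o->next = p->idle_list;
    p->idle_list = o;
    p->idle_count++;
}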

> For this we'd need to create all pools at the same time, and only once.
> After running for a while, system memory will be fragmented by zillions
> of requests for URL strdups and frees, and the next time you preallocate
> a bunch at a time, you won't get anything resembling chunked allocation.
> Individual object proximity will be absolutely random.

Here I disagree. Sure there will be some fragmentation, but not a
zillion. With the proposed pattern, malloc should be quite effective at
limiting the fragmentation by self-organising.

What I forgot to say was that memory should be freed in bunches as well,
where several entries are freed at once, not one at a time.
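
Roughly along these lines (again just a sketch, reusing the hypothetical
simple_pool/idle_obj types from above): when the idle list has grown well
past the high-water mark, trim it down in one batch instead of doing one
free() per pool_free() call.

static void pool_trim(simple_pool *p, size_t batch)
{
    /* release up to 'batch' idle entries in one go */
    while (p->idle_count > p->idle_highwater && batch-- > 0) {
        idle_obj *o = p->idle_list;
        p->idle_list = o->next;
        p->idle_count--;
        free(o);
    }
}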

> Also, freeing from pool tail is only adding fuel to the fragmentation.

Again I disagree. What I meant by the tail here is the tail of the
sorted list of idle allocations, so high memory gets freed before low
memory and low allocations get used before high ones.
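
As a sketch of the idea (hypothetical code again, building on the same
simple_pool types): keep the idle list sorted by address, serve
allocations from the low end and release from the high end, so the pool
consolidates at low addresses and the top of the heap can be given back.

/* Insert an idle object keeping the list sorted by ascending address. */
static void idle_insert_sorted(simple_pool *p, idle_obj *o)
{
    idle_obj **pp = &p->idle_list;

    while (*pp && (char *)*pp < (char *)o)
        pp = &(*pp)->next;
    o->next = *pp;
    *pp = o;
    p->idle_count++;
}

/* Release idle objects from the high-address tail of the sorted list;
 * pool_alloc() above already takes from the head, i.e. the lowest address. */
static void trim_from_tail(simple_pool *p, size_t nfree)
{
    while (nfree-- > 0 && p->idle_count > 0) {
        idle_obj **pp = &p->idle_list;
        idle_obj *victim;

        while ((*pp)->next)             /* walk to the last (highest) entry */
            pp = &(*pp)->next;
        victim = *pp;
        *pp = NULL;
        p->idle_count--;
        free(victim);
    }
}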

> I'm not talking about fragmentation in an hour or a few of test loads,
> but about weeks of operation under loads of 15K mallocs/frees per second.

Neither am I.

> I don't believe that preallocation in bulk with current memPools will
> solve fragmentation.

On the production servers I have had, fragmentation hasn't actually been
a big issue. Most of the allocated memory has actually been in use, even
if not all of it is accounted for in memory pools.

> Basically, we might not reach an ideal solution with chunked pools either,
> but pool fragmentation and overhead can be accounted for in the memory
> utilization page. I think this alone is already a useful feature. Currently
> we have some 50% of Squid's size in some shadow, unaccounted area.

Yes, but is it free (fragmented memory) or is it in use by Squid for
uncounted structures? The mallinfo stats should tell.
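
For example, something like this glibc-specific snippet (just a sketch)
separates the two: uordblks is what the process actually has in use,
fordblks is memory that is free but still held by malloc, i.e. the
fragmented part.

#include <stdio.h>
#include <malloc.h>

static void report_malloc_usage(void)
{
    struct mallinfo mi = mallinfo();

    printf("heap size (arena):        %d bytes\n", mi.arena);
    printf("in use (uordblks):        %d bytes\n", mi.uordblks);
    printf("free but held (fordblks): %d bytes\n", mi.fordblks);
}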

/Henrik