Re: Chunked mempools, a second verdict

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Sat, 05 May 2001 00:53:11 +0200

Andres Kroonmaa wrote:

> yup. I initially leaned to that too. But 2 problems.
> Alloc rate can range from 1/min to 500K/sec. At one extreme we'd have
> time lag of few hundred minutes, at other we'd have heavy overhead,
> most problem being time lag.

Ok. So let's keep the event then.. not a major thing. Having it
self-adjusting is nice, but if, as you say, this code is really used a
lot then it should be kept as simple as possible. But it also tells us
that there is lots of room for optimization in the code paths.. an
average of 125+ memory allocations per request is a lot of allocations.

> > Note: A higher precision time can be emulated from the sampled time
> > and mem_stat_counter.
>
> what do you mean, please explain?

By combining the sampled time and the counter you get full serialisation
of the events. You know that call 2 happened after call 1. From this you
can also deduce that some time passed between call 4 and call 30 even if
the time hasn't been sampled again. The main problem is finding the
correct scale/relation between the two parts of the emulated time.

But I still don't see the need for very detailed statistics here. If
there is not much activity then there is no need for averages, as
counters work just as well, and if there is a lot of activity then
anyone using these statistics is only interested in a rough figure, not
caring very much over how long a period it is measured (within
reasonable limits, 5s to 30min).

In fact, plain counters are more than sufficient for rate statistics
(simply do delta/time measurements when collecting the statistics).
Anything beyond that is only a bonus, and if a bonus not crucial for
operation is troublesome to implement then it is better left out.

--
Henrik
Received on Fri May 04 2001 - 16:50:49 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:13:59 MST