Re: Chunked mempools, a first verdict

From: Alex Rousskov <rousskov@dont-contact.us>
Date: Thu, 3 May 2001 09:53:32 -0600 (MDT)

On Thu, 3 May 2001, Andres Kroonmaa wrote:

> Generally, probably yes. In this case, feeding in a NULL pointer by
> mistake is not dangerous.

It will not lead to a coredump, but it will consume more CPU cycles
than the caller wanted it to. Personally, I do not see any _benefit_
in overloading the function with two semantically different actions.
To me, it is no different from another "switch" wrapper.
 
> There are 2 more issues to resolve: the mempool lib is rudely
> importing time_t squid_curtime to timestamp HWmark updates and to
> track each chunk's last reference time. Ideas are welcome on how to
> stop that. Drop HWmark times altogether?
>
> Another issue is one I added myself. To get an allocation rate per
> pool, I save the alloc counter each time memReport is called,
> storing it in the MemPoolMeter struct itself, i.e. in the library's
> space. By all means this is dirty, but then again, it was quick ;)
> I personally find it useful to see in the cachemgr output which
> types of pools get the most memory traffic. It reports allocs/sec
> since the last memReport output. Your opinions?

How about calling a memPoolUpdateTime(time_t) function from Squid
whenever the time changes (or once in a while)?
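
Something along these lines, just to illustrate the idea (only the
memPoolUpdateTime() name comes from the suggestion above; the variable
and call-site names are made up):

    #include <time.h>

    /* MemPool side: keep a private notion of "now", fed by Squid,
     * instead of importing squid_curtime directly. */
    static time_t mem_pool_time = 0;

    void
    memPoolUpdateTime(time_t now)
    {
        mem_pool_time = now;
    }

    /* Internal code then stamps HWmarks and chunk last-reference
     * times with mem_pool_time. On the Squid side, the call goes
     * wherever the clock is updated, or into a periodic event:
     *
     *     memPoolUpdateTime(squid_curtime);
     */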

I think reporting the allocation rate is great.

I do not like the "since last memReport output" approach because lots
of scripts and humans may ask Squid for that report. We cannot assume
that there is only one "user" of the report. For example, all Duane's
caches have scripts polling them for stats every 5 minutes or so; if
you happen to request stats shortly after one of those polls, your
counters will be wrong.

I would suggest that allocation rate stats be maintained by the
memPool library itself, using the memPoolUpdateTime() approach above.
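
A rough sketch of what I mean (everything except memPoolUpdateTime()
is an invented name): the library rolls the per-pool counters over on
a fixed interval inside the time-update hook, so the reported rate
does not depend on who asks for the report or how often.

    #define MEM_RATE_INTERVAL 60       /* seconds per measurement window */

    static time_t last_rate_flush = 0; /* start of the current window */

    void
    memPoolUpdateTime(time_t now)
    {
        MemPool *pool;

        mem_pool_time = now;
        if (now - last_rate_flush < MEM_RATE_INTERVAL)
            return;
        /* close the window: compute allocs/sec and reset the counters */
        for (pool = Pools; pool; pool = pool->next) {
            pool->meter.alloc_rate = (double) pool->meter.window_allocs /
                (now - last_rate_flush);
            pool->meter.window_allocs = 0;
        }
        last_rate_flush = now;
    }

memReport() would then just print pool->meter.alloc_rate, and every
reader sees the same number for the whole window.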

Alex.