Re: [PATCH] Do not release entries that should be kept in local memory cache.

From: Alex Rousskov <rousskov_at_measurement-factory.com>
Date: Wed, 04 Jul 2012 23:12:55 -0600

On 07/04/2012 07:31 PM, Dmitry Kurochkin wrote:

> Since r11969, Squid calls trimMemory() for all entries, including
> non-swappable, to release unused MemObjects memory. But it should not
> release objects that are or should be stored in local memory cache.

Agreed, at least for now. I suspect the "We must not let go any data for
IN_MEMORY objects" rule no longer applies because the trimMemory() method
is no longer called for IN_MEMORY objects, but I doubt we should
investigate that right now, even if we could relax or simplify that
condition under some circumstances.

> The patch adds StoreEntry::keepInLocalMemory() method to check if an
> entry should be stored in local memory cache. If it is true, do not
> release entry in trimMemory().

That does not feel right, on several levels:

A1) A store entry itself cannot tell whether it will eventually be kept
in some cache. That [sometimes complex] decision is the responsibility
of the cache, not the entry. The non-shared memory cache decision code
that the patch moves into the entry is now misplaced.

This problem is easy to fix. Just have StoreEntry::trimMemory() ask the
new boolean StoreController::keepInLocalMemory(e) whether the Store
wants to keep the given [busy or idle] entry whole. If not, the trimming
code can proceed. The old StoreController::handleIdleEntry() will call
the same method when memStore is nil.
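
Roughly, something like this sketch (just an illustration; the
keepInLocalMemory() name and the call path via Store::Root() are not
final):

    void
    StoreEntry::trimMemory()
    {
        // the Store, not the entry, decides whether the whole object
        // should stay in the local memory cache
        if (Store::Root().keepInLocalMemory(*this))
            return; // do not trim entries the memory cache wants whole

        // ... existing trimming logic would continue here ...
    }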

However, perhaps we should decide how to handle (A2) and (B2) below
before we fix the minor "design bugs" of (A1) and (B1).

A2) Will your changes have much effect in non-SMP mode after the memory
cache is completely filled? The (mem_node::InUseCount() <=
store_pages_max) condition will very often be false when that happens,
right? Thus, many/most entries will continue to get trimmed just as they
are trimmed now. Do you agree?

B1) These changes appear to ignore the shared memory cache, which uses
different decision logic when it comes to keeping an entry in the cache.
See MemStore::considerKeeping().

To fix this, we can copy (or move?) some of the checks from
MemStore::considerKeeping() to the new MemStore::keepInLocalMemory(e)
that StoreController::keepInLocalMemory(e) will call, similar to how the
current StoreController::handleIdleEntry() is organized. Which checks to
copy or move? I am thinking about the ones that do not depend on whether
the entry is fully loaded, but others might make sense too.
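
A rough sketch of that organization, mirroring handleIdleEntry(); the
non-shared checks shown are just placeholders for whatever set we settle
on:

    bool
    StoreController::keepInLocalMemory(StoreEntry &e)
    {
        if (memStore) // the shared memory cache is enabled
            return memStore->keepInLocalMemory(e); // its own checks

        // non-shared memory cache: roughly the checks that used to live
        // in handleIdleEntry(); the exact set is open for discussion
        return e.memoryCachable() &&
            mem_node::InUseCount() <= store_pages_max;
    }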

B2) Should there be more trimming checks, in addition to the ones in
MemStore::considerKeeping()? I think so. When we start reading the entry
from the server it is "in transit". It is not in the memory cache yet.
We cannot keep all cachable "in transit" entries in RAM. There has to be
a limit or we may run out of RAM. However, I do not see where we can get
that limit.

Should the shared memory cache keep a [shared] total size of these
in-transit cachable entries and start trimming them if they are not
going to fit? This will create a kind of "reservation" system for
in-transit cachable entries. However, the entry size may be
changing/growing, so we do not know how much to reserve in general. We
can cheat today because the current shared memory cache uses fixed-size
slots (but we will have to solve this problem later).
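
For illustration only, the bookkeeping could look roughly like the class
below; the name is made up, and a real version would have to keep the
counter in shared memory (e.g., as an atomic):

    #include <cstddef>

    // hypothetical accounting of fixed-size slots "reserved" for
    // in-transit cachable entries
    class InTransitReservations
    {
    public:
        explicit InTransitReservations(const std::size_t slotLimit):
            limit(slotLimit), used(0) {}

        // try to reserve one slot for a new in-transit cachable entry
        bool reserve() {
            if (used >= limit)
                return false; // would not fit; the caller may trim
            ++used;
            return true;
        }

        // return a slot when the entry is cached, trimmed, or aborted
        void release() { if (used > 0) --used; }

    private:
        std::size_t limit; // slots the shared memory cache can offer
        std::size_t used;  // slots currently reserved
    };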

How did the old (before SMP changes) code solve that problem? What
prevented it from creating too many in-transit cachable entries, keeping
them all in RAM, and eventually running out of RAM when the conditions
are right? Nothing??

If that is the case, then I suggest that we use the same logic and
prevent trimming of cachable in-transit objects regardless of the
current RAM usage state. The keepInLocalMemory(e) methods would then
ignore the current memory cache size and capacity, solving (A2) and
(B2): if an entry might be cached in RAM later, it should not be trimmed
now, even if the memory cache is full.
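
With that logic, the (B1) sketch would shrink to a capacity-independent
check, roughly:

    bool
    StoreController::keepInLocalMemory(StoreEntry &e)
    {
        // ignore the current cache size and capacity: if the entry might
        // be memory-cached later, do not trim it now, even if the memory
        // cache is currently full
        return memStore ? memStore->keepInLocalMemory(e) :
            e.memoryCachable();
    }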

If we do that, we should add some source code comments to document this
understanding/assumption though.

Thank you,

Alex.