Re: [PATCH] Do not release entries that should be kept in local memory cache.

From: Dmitry Kurochkin <dmitry.kurochkin_at_measurement-factory.com>
Date: Thu, 05 Jul 2012 17:38:43 +0400

Hi Alex.

Alex Rousskov <rousskov_at_measurement-factory.com> writes:

> On 07/04/2012 07:31 PM, Dmitry Kurochkin wrote:
>
>> Since r11969, Squid calls trimMemory() for all entries, including
>> non-swappable ones, to release unused MemObject memory. But it should
>> not release objects that are, or should be, stored in the local memory
>> cache.
>
> Agreed, at least for now. I suspect the "We must not let go any data for
> IN_MEMORY objects" check no longer applies because the trimMemory()
> method is not called for IN_MEMORY objects any more, but I doubt we
> should investigate that right now, even if we can relax or simplify that
> condition under some circumstances.
>

Yes, it seems that the IN_MEMORY check may no longer be needed. But I
have not investigated it; removing the check is a separate issue.

>
>> The patch adds a StoreEntry::keepInLocalMemory() method to check
>> whether an entry should be stored in the local memory cache. If so, do
>> not release the entry in trimMemory().
>
> That does not feel right, on several levels:
>
> A1) A store entry itself cannot tell whether it will eventually be kept
> in some cache. That [sometimes complex] decision is the responsibility
> of the cache, not the entry. The decision code that the patch moved out
> of the non-shared memory cache is now misplaced.
>
> This problem is easy to fix. Just have StoreEntry::trimMemory() ask the
> new boolean StoreController::keepInLocalMemory(e) whether the Store
> wants to keep the given [busy or idle] entry whole. If not, the trimming
> code can proceed. The old StoreController::handleIdleEntry() will call
> the same method when memStore is nil.
>

The intent was to do some easy checks; in the end, we may still decide
not to store the entry in the memory cache. Alas, it turns out even
these simple checks are flawed because they ignore the shared memory
cache.

I agree that the code to make the decision is misplaced.
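
Something like this, perhaps (an untested sketch with the signature
simplified; I am assuming the method names you propose, reached through
the existing Store::Root() accessor):

    void
    StoreEntry::trimMemory()
    {
        // The cache, not the entry, decides whether this entry should
        // be kept whole in local memory; if so, do not trim anything.
        if (Store::Root().keepInLocalMemory(*this))
            return;

        // ... the existing trimming logic continues below ...
    }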

> However, perhaps we should decide how to handle (A2) and (B2) below
> before we fix the minor "design bugs" of (A1) and (B1).
>
>
> A2) Will your changes have much effect in non-SMP mode after the memory
> cache is completely filled? The (mem_node::InUseCount() <=
> store_pages_max) condition will very often be false when that happens,
> right? Thus, many/most entries will continue to get trimmed as they are
> getting trimmed now. Do you agree?
>

Yes.

>
> B1) These changes appear to ignore the shared memory cache which has a
> different decision logic when it comes to keeping an entry in the cache.
> See MemStore::considerKeeping().
>

Agreed. The shared memory cache is broken as well (because the entry is
already trimmed by the time we call considerKeeping()). The patch
accidentally fixes it in cases where local memory is not full.

> To fix this, we can copy (or move?) some of the checks from
> MemStore::considerKeeping() to the new MemStore::keepInLocalMemory(e)
> that StoreController::keepInLocalMemory(e) will call, similar to how the
> current StoreController::handleIdleEntry() is organized. Which checks
> should we copy or move? I am thinking about the ones that do not depend
> on whether the entry is fully loaded, but others might make sense too.
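
Perhaps something along these lines (a rough, untested sketch; exactly
which considerKeeping() checks can run before the entry is fully loaded
is my guess):

    bool
    MemStore::keepInLocalMemory(StoreEntry &e) const
    {
        // basic cachability rules do not depend on how much is loaded
        if (!e.mem_obj || !e.memoryCachable())
            return false;

        // the expected reply size may still be unknown (-1) in transit;
        // reject only entries already known to exceed a shared slot
        const int64_t expectedSize = e.mem_obj->expectedReplySize();
        if (expectedSize > static_cast<int64_t>(Ipc::Mem::PageSize()))
            return false;

        return true; // the completeness check stays in considerKeeping()
    }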
>
>
> B2) Should there be more trimming checks, in addition to the ones in
> MemStore::considerKeeping()? I think so. When we start reading the entry
> from the server, it is "in transit". It is not in the memory cache yet.
> We cannot keep all cachable "in transit" entries in RAM. There has to be
> a limit or we may run out of RAM. However, I do not see where we can get
> that limit.
>
> Should the shared memory cache keep a [shared] total size of these
> in-transit cachable entries and start trimming them if they are not
> going to fit? This will create a kind of "reservation" system for
> in-transit cachable entries. However, the entry size may be
> changing/growing so we do not know how much to reserve in general. We
> can cheat today because the current shared memory cache uses fixed-size
> slots (but we will have to solve this problem later).
>
>
> How did the old (before the SMP changes) code solve that problem? What
> prevented it from creating too many in-transit cachable entries, keeping
> them all in RAM, and eventually running out of RAM if the conditions are
> right? Nothing??
>

AFAIK, nothing. Though I may be mistaken.

> If that is the case, then I suggest that we use the same logic and
> prevent trimming of cachable in-transit objects regardless of the
> current RAM usage state. The keepInLocalMemory(e) methods will ignore
> the current memory cache size and capacity then, solving (A2) and (B2):
> If it might be cached later in RAM, it should not be trimmed now, even
> if the memory cache is full.
>

That sounds like an easy solution for now. Later, we should consider
implementing more accurate (and more complex) checks for freeing
in-transit memory, like the ones you describe above.
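
For the controller, I imagine something as simple as this (untested;
the non-shared branch deliberately ignores the current cache size and
capacity, as you suggest):

    bool
    StoreController::keepInLocalMemory(StoreEntry &e) const
    {
        if (memStore)
            return memStore->keepInLocalMemory(e); // shared cache rules

        // non-shared memory cache: if the entry might be cached in RAM
        // later, do not trim it now, even if the cache is full
        return e.memoryCachable();
    }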

> If we do that, we should add some source code comments to document this
> understanding/assumption though.
>

Yes.

I will prepare a new patch, thanks for your review!

Regards,
  Dmitry

>
> Thank you,
>
> Alex.
Received on Thu Jul 05 2012 - 13:41:23 MDT
