Re: [PATCH] OAuth 2.0 Bearer authentication

From: Alex Rousskov <rousskov_at_measurement-factory.com>
Date: Sat, 09 Aug 2014 17:38:37 -0600

On 08/09/2014 04:19 AM, Amos Jeffries wrote:
> On 5/08/2014 3:22 a.m., Alex Rousskov wrote:
>> On 07/31/2014 03:29 AM, Amos Jeffries wrote:
>>> A garbage collection TTL "cleanup_interval" is configurable and removes
>>> cache entries which have been stale for at least 1 hr.

>> While some old code still uses periodic cleanup, I think we should avoid
>> adding more code like that. Periodic cleanup leads to cache overflows,
>> memory usage surprises, and hot idle. Please remove old unneeded cache
>> entries when adding a new cache entry instead of checking the cache
>> state every hour.

> I agree. However this is a common algorithm for all of Squid
> authentication types. Updating it should be done as a separate action
> and cover more than just this auth scheme. In particular the core cache
> code is shared by Basic and Digest.

It may be acceptable to reuse the old caching code that uses periodic
cleanup. However, adding brand new caching classes with periodic cleanup
is a rather different matter. An "it is a common wrong approach" excuse
cannot be successfully applied to a completely new cache class IMO.

In the proposed patch, the caching with periodic cleanup is done by
TokenCache, which is a new type with various management code surrounding
it (essentially a new class for the purposes of this discussion).
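
For illustration, here is a minimal sketch of the on-insert cleanup
suggested above, using standard containers only (the names and the
std::string/std::time_t stand-ins are assumptions, not the patch code):

    #include <ctime>
    #include <map>
    #include <string>

    // last-use timestamps keyed by token value (illustration only)
    using SketchCache = std::map<std::string, std::time_t>;

    void insertToken(SketchCache &cache, const std::string &token,
                     const std::time_t now, const std::time_t staleLimit)
    {
        // expire stale entries as a side effect of this insertion,
        // so no timer-driven cleanup pass is needed
        for (auto it = cache.begin(); it != cache.end();) {
            if (it->second < now - staleLimit)
                it = cache.erase(it);
            else
                ++it;
        }
        cache[token] = now;
    }

A real implementation would combine this with LRU ordering so that
pruning does not have to scan the whole map; see the two-index sketch
further down.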

>>> + // only clear tokens out of cache after 1 hour stale
>>> + // could make this configurable
>>> + time_t stalenessLimit = current_time.tv_sec - 60*60;
>>
>> Cache size should be(**) regulated in terms of memory usage (bytes), not
>> entry age (seconds). I believe estimating memory footprint of a cache
>> entry would be fairly straightforward in this case. Would you be willing
>> to convert the code to use size limits?
>
> I agree with the idea. If/when we can move to LruMap or equivalent that
> will happen. For now this is consistent with the other auth schemes, and
> sufficient as a temporary state.

Why can this code not use the existing LruMap class from the start? Yes,
LruMap requires some adjustments to accommodate more users of the class,
but which is worse: making those adjustments, or committing code that
introduces a new, wrong class?

The consistency argument hardly applies to a new cache class. Yes, some
existing caches use time-based limits, but others do not. Why pick the
wrong model for the new code?

> The memory calculation is also not as simple as it seems.

I did not say it is simple. I said it is "fairly straightforward", which
is at least a notch above simple :-). I think it reflects the complexity
of those changes well.

> The SBuf
> content accounts for the majority of the memory usage and is dynamic at
> run-time. What is actually needed to account for the memory size of a
> TokenCache node is:
>
> sizeof(TokenPointer)
> +
> sizeof(Token)
> +
> (*X)->b64encoded.store_->capacity

Sounds straightforward to me.
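
As a rough sketch of that estimate (std::string and std::shared_ptr
stand in for SBuf and the ref-counted TokenPointer here; the helper
name is made up, not Squid code):

    #include <cstddef>
    #include <memory>
    #include <string>

    struct Token {
        std::string b64encoded;   // stand-in for the SBuf token value
    };
    using TokenPointer = std::shared_ptr<Token>;

    // rough per-entry footprint: the pointer, the Token object itself,
    // and the capacity of the buffer backing the token value
    std::size_t tokenFootprint(const TokenPointer &t)
    {
        return sizeof(TokenPointer)
               + sizeof(Token)
               + t->b64encoded.capacity();
    }

The cache would add this estimate when inserting an entry and subtract
it when evicting one, enforcing a byte limit rather than an age limit.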

> Note that:
> 1) store_ is correctly private and SBuf::length() is not sufficient to
> account for most of the allocated MemBlob capacity being consumed.

I do not understand why exposing the underlying storage capacity
information for this purpose is a big deal.

> 2) the store_->capacity may be shared by the Token, so freeing just this
> entry may not reduce memory usage by the amount it "uses". Accounting for
> it may cause the cache size to be over-estimated.

Sure, but we do not need a precise number. We only need a good estimate.
And if experience shows that using shared MemBlob capacity is not good
enough, we can always "cow" (copy-on-write) the stored tokens to make
sure they own their buffers exclusively.

Please note that the cache is responsible only for what it uses. It is
not responsible for the memory used by others, even if it is the same
memory. If a MemBlob is shared by a cache entry and something else in
Squid, removing that cache entry is sufficient as far as cache
obligations and accounting are concerned.

> 3) the whole calculation is hard-coded at compile time in LruMap. So we
> cannot call the SBuf size methods anyway.

I do not understand why making the LRU entry size variable for this
purpose is a big deal.
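
To illustrate how small that change could be (a hypothetical interface,
not the real LruMap declaration), the difference is whether the
per-entry cost is a compile-time constant or something computed from
each stored value:

    #include <cstddef>
    #include <string>

    struct Token { std::string b64encoded; };

    // fixed cost, decided at compile time: every entry is charged the same
    constexpr std::size_t fixedTokenCost = sizeof(Token);

    // variable cost: each entry is charged for what it actually holds
    inline std::size_t variableTokenCost(const Token &t)
    {
        return sizeof(Token) + t.b64encoded.capacity();
    }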

> I notice that the current use of LruMap for storing
> CertValidationResponse objects is also suffering from this problem. It
> fails to account for the data size used by a whole vector of error
> strings, and for all of an X509 certificate's memory allocation (we know
> those are huge, with lots of linked details).

That bug is one of the motivations to do the right thing here. I do not
see why we should make a similar mistake twice. Let's learn from our
mistakes. And making LruMap more flexible would actually help with
fixing the certificate storage bug as well (although the problem with
that bug is far more difficult than adding variable entry size to LruMap).

>>> + /*
>>> + * For big caches we could consider stepping through
>>> + * 100/200 entries at a time. Lets see how it flies
>>> + * first.
>>> + */
>>> + Auth::Bearer::TokenCache *cache = &Auth::Bearer::Token::Cache;
>>> + for (Auth::Bearer::TokenCache::iterator itr = cache->begin(); itr != cache->end();) {
>>
>> I do not think we should do linear search like that. It is going to be
>> too slow for large caches and adding arbitrary "maximum number of
>> iterations" limits just adds more tuning headaches. Why not use an LRU
>> cache instead? IIRC, we have even added LruMap class for that recently.
>>
>
> No less efficient than the linear walk the LruMap does. A full scan is
> required for the time-based removal since entries are mapped by their
> key value not by expiry time.

To avoid linear search, an LRU-based cache, such as LruMap, uses two
indexes, not one:

* One index is ordered by reference time (used for cleanup in LRU order);
* One index is based on the entry key (used by external code to find entries).

Thus, there should be no linear searches in a correctly implemented
LRU-based cache. I do not see such searches in LruMap, but if I missed
them, and they do exist, it is a bug.
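
A minimal sketch of that two-index structure, using standard containers
(std::shared_ptr stands in for Squid's ref-counted Pointer; none of
this is LruMap's actual code):

    #include <cstddef>
    #include <list>
    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Token { std::string b64encoded; };

    class LruTokenCacheSketch {
    public:
        explicit LruTokenCacheSketch(std::size_t limit): memLimit(limit) {}

        void add(const std::string &key, const std::shared_ptr<Token> &t) {
            remove(key);                        // replace any older entry
            order.push_front(key);              // index 1: recency order
            entries[key] = {t, order.begin()};  // index 2: lookup by key
            memUsed += footprint(*t);
            while (memUsed > memLimit && !order.empty())
                evictOldest();                  // no linear search needed
        }

        std::shared_ptr<Token> find(const std::string &key) {
            const auto it = entries.find(key);
            if (it == entries.end())
                return nullptr;
            // refresh recency: move the entry to the front of the order
            order.splice(order.begin(), order, it->second.pos);
            return it->second.token;
        }

    private:
        struct Entry {
            std::shared_ptr<Token> token;
            std::list<std::string>::iterator pos;  // position in `order`
        };

        static std::size_t footprint(const Token &t) {
            return sizeof(Token) + t.b64encoded.capacity();
        }

        void remove(const std::string &key) {
            const auto it = entries.find(key);
            if (it == entries.end())
                return;
            memUsed -= footprint(*it->second.token);
            order.erase(it->second.pos);
            entries.erase(it);
        }

        void evictOldest() {
            const std::string oldest = order.back();  // copy before erasing
            remove(oldest);
        }

        std::list<std::string> order;                    // recency index
        std::unordered_map<std::string, Entry> entries;  // key index
        std::size_t memUsed = 0;
        std::size_t memLimit;
    };

Both lookups and evictions stay cheap: find() goes through the key
index, eviction pops from the tail of the recency index, and because
the stored value is a ref-counted pointer (shared_ptr standing in for
Token::Pointer), entries neither leak nor get freed while the cache
still references them.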

> LruMap also uses raw pointers for storing its entries. These Token
> objects are ref-counted, and if we store them as raw pointers they will
> either leak or be freed while still linked by the cache.
> The only way around that seems to be storing Pointers to the list and
> making every access a double de-reference. Or re-implementing the entire
> LruMap template using Pointer and a dynamic size() calculation.

Yes, some LruMap adjustments will be necessary to make it more flexible,
but I do not think they require a class rewrite. If the need for those
adjustments is the key reason you do not want to use [what we both agree
is] a better approach, and you do not want to make them yourself, I can
help with them.

> The existing cache model used by both Basic and Digest already is
> sufficient for the needs of this feature. Note that the periodic cleanup
> is an optimization for large caches, to reduce the impact of this linear
> search for stale objects instead of

AFAICT, you are not using the existing caching code for this new
feature. You are adding a new cache type along with new cache
maintenance code (TokenCache). If the patch used some existing caching
code, I would probably not even discuss it, but that is not what is
happening in this case.

BTW, you could structure the proposal this way:

A) Either accept the code as is because, while I agree it uses the wrong
approach, I am not going to improve it (for various reasons outside of
squid-dev review scope or Squid Project control);

B) Or keep it as a patch until somebody needs it badly enough to do the
necessary legwork to massage the code into a better shape. I did my best
to get it this far, and somebody else can take it from here.

Nobody likes to make choices like that, but they are sometimes necessary
due to real-life constraints we all have to face. I know I had to make
proposals like that in the past. If that is the choice we have to make
now, let's be clear about it and make it.

Thank you,

Alex.