Re: Squid-3.2 status update

From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 22 Sep 2011 11:49:33 -0600

On 09/21/2011 10:27 PM, Amos Jeffries wrote:
> SMP rock storage changes hit 3.2 in r11342 a few minutes ago. Expect
> some new bugs to arrive and some to die mysteriously.

> * installations with no workers are expected to only see small benefits
> via the existing storage code streamlining and bug fixes.

And even small benefits are not currently guaranteed. There are probably
more new bugs than improvements if you are not into SMP caching,
especially in corner cases. Non-SMP setups were not the focus of this work.

> * installations with workers will automatically get the SMP shared
> memory caching. This seems to be the newest most experimental of the
> whole update. So fair warning: expect new bugs in this area.

If you use SMP caching and do not want shared memory caching, see
memory_cache_shared in squid.conf.
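
For example, a minimal sketch (the worker count and cache size are
illustrative only, not recommendations):

  # two workers, each with its own private memory cache
  workers 2
  cache_mem 256 MB
  memory_cache_shared off

With multiple workers, memory_cache_shared defaults to on where the
platform supports it, so it has to be turned off explicitly.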

From a developer's point of view, the biggest change is that shared
caching requires using the memory cache via the Store APIs instead of
the process-global store_table whenever you need to know something
about a cached object. Currently, store_table is still used for
in-transit objects and possibly other half-cached entries (leading to
bugs). We still need to wean a lot of Squid code off store_table.
Eventually, that global should be removed.

> * installations choosing to explicitly configure "rock" cache_dir type
> get that. This is the older part with prior production use as a 3.1
> branch. Some changes made for SMP support. So a potential risk of new
> bugs, but hopefully not very many.

I would say that a _lot_ of changes were made for SMP support. The
underlying database design and Store API code are mostly the same, but
the v3.1 code used blocking I/O within the same Squid process. The
now-official code is meant to be used with disker kid processes, which
means a lot of internal changes and complications.

There are also quite a few feature holes. For example, hot
reconfiguration of a Rock cache_dir is not yet supported. Statistics
are lacking. Squid uses a lot more memory for shared I/O pages than it
needs, and there is no good way to control that.

> Project details are http://wiki.squid-cache.org/Features/RockStore.
> Although lacking config how-to at present.

If you use Rock store, use the round-robin cache_dir selector.
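
The selector is configured with the store_dir_select_algorithm
directive; a minimal example:

  store_dir_select_algorithm round-robin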

If you use Rock store and your disks are close to 100% utilized, you
probably need to use the max-swap-rate and swap-timeout cache_dir
options. Remember that diskers cannot slow down workers, so diskers can
be overwhelmed with swap requests.
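
A hypothetical example (the path, size, and limits are made up; tune
the rate and timeout to what your disks can actually sustain):

  # limit this cache_dir to roughly 200 swap operations per second and
  # drop swap requests expected to take longer than 300 milliseconds
  cache_dir rock /var/spool/squid-rock 4096 max-swap-rate=200 swap-timeout=300

swap-timeout is in milliseconds; max-swap-rate is in swaps per second.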

> Also, we have to decide now whether or not to drop COSS support from
> 3.2. Given that rock storage fills the same architectural niche of
> efficient in-memory disk backed persistent storage for small objects on
> high performance systems.
>
> Votes please? (if you want to keep it please say why)

I would propose the following decision tree:

  1. If it can compile and start now, keep it.
  2. If it could compile and start before the Rock changes, keep it.
  3. Otherwise, drop it from trunk in one year and do not
     require COSS updates when something it uses changes.

Personally, I wish COSS were available and worked great in Squid3, but
nobody seems to want it badly enough to do something about it. We should
keep it as long as there is a reasonable chance somebody will pick it up.

When we started working on Rock Store, comparing COSS with ufs was very
useful, but the Rock interpretation of the Store APIs is now a better
starting point for any new store, IMO.

HTH,

Alex.