Re: Rock Store: suggestions on object size restriction to Ipc::Mem::pageSize()

From: Alexander Komyagin <komyagin_at_altell.ru>
Date: Mon, 23 Jul 2012 12:45:49 +0400

On Fri, 2012-07-20 at 10:22 -0600, Alex Rousskov wrote:
> On 07/20/2012 08:28 AM, Alexander Komyagin wrote:
> > Hi! I've taken a look at the code related to object caching and found
> > only two places where this restriction (hard-coded to 32K) is
> > actually applied:
> >
> > 1) DiskIO/IpcIoFile.cc: a Squid worker pushes an i/o request to a
> > disker via IpcIoFile::push(), and the disker handles that request with
> > DiskerHandleRequest(). The IpcIoMsg object contains the single memory
> > page used for the i/o; before and after the i/o, plain byte arrays are
> > used for data storage.
> >
> > So why not use an array of pages for i/o here instead of one single
> > page? We know the exact object size here, so we can easily calculate
> > the number of pages needed to load/store an object.
>
> Properly locating, locking, and securely updating a single shared page
> is much easier than doing so for N pages. We will support multi-page
> shared caching eventually, but it is far more complex than just
> calculating the number of needed pages (N), especially if you do not
> want to reserve all pages in advance.
>
> You found where the 32KB page size limit is used. The other, far more
> important limit that is implicit in the current code is the number of
> shared pages per object that the current algorithms support. That limit
> is 1.
>
>

I probably missed those algorithms; can you point them out for me so I
can take a look?
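
To make the calculation in my first point concrete, this is all I meant
(just a sketch: pagesNeeded() is a made-up name, not an existing Squid
function, and nothing here is meant as the final design):

    #include <cstddef>

    // Sketch only: how many shared pages an object of objSize bytes
    // would need, given the Ipc::Mem::pageSize() value discussed above.
    static size_t
    pagesNeeded(const size_t objSize, const size_t pageSize)
    {
        return (objSize + pageSize - 1) / pageSize; // ceiling division
    }

    // e.g., a 100 KB object with 32 KB pages:
    // pagesNeeded(102400, 32768) == 4

IpcIoFile::push() and DiskerHandleRequest() would then have to carry
that many page references per request instead of exactly one, which, as
you point out, is where the locking complexity comes in.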

> > 2) Shared memory (MemStore, shm): Squid stores a 'store_key ->
> > MemStoreMap::Extras' map, where MemStoreMap::Extras contains a single
> > memory page with the stored data. Just as with IPC I/O, we could use
> > an array of pages here and adapt the MemStore::copyToShm() and
> > MemStore::copyFromShm() functions.
> >
> > Are there any other places where the Ipc::Mem::pageSize() restriction
> > takes effect?
> >
> > I think all Squid users are interested in caching large objects. In
> > particular, 32K is too small even to cache a GIF with a cute kitty!
>
> Well, not all Squid users are interested in caching at all, but yes,
> there is consensus that supporting shared caching of large files is a
> highly desirable feature:
> http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
>
> One of the biggest challenges for multi-page shared caching support is
> to come up with a simple page allocation algorithm that
>
> (a) can be implemented in shared memory without complex locks;
>
> (b) will not require us to duplicate what the regular file systems do
> to manage multi-block allocation; and
>
> (c) will not result in severe fragmentation and other issues that will
> prevent its use in high-performance environments (which are the focus of
> Rock store).
>
> However, before all of that exciting work can happen, we should finish
> cleaning up major Store APIs, IMO (e.g., remove global store_table and
> add a dedicated memory cache class for non-shared caches). Otherwise, we
> will constantly bump into current API limitations and bugs, while adding
> more hacks and bugs.

OK. I think that once the cleanup is done and the Store API is fixed
(though that will take some time), multi-page caching support won't be a
big problem. I bet you already have some ideas on how to implement it.
Or not? :)
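
In case it helps the discussion, here is the rough shape I have in mind,
as a toy model only: none of these names exist in Squid, the std::vector
stands in for the real shared memory segment, and a real version would
still need the page reservation, entry locking, and failure handling you
describe. The point is just that once a map entry remembers a list of
page indexes instead of a single page, the copyToShm()/copyFromShm()
style loops stay simple:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    static const size_t PageSize = 32 * 1024; // stand-in for Ipc::Mem::pageSize()

    // Pretend this buffer is the shared page pool; it would be sized to
    // nPages * PageSize at startup.
    static std::vector<char> SharedPages;

    static char *pageAddress(const size_t pageIdx)
    {
        return &SharedPages[pageIdx * PageSize];
    }

    // Spread one object over the pages already reserved for its map entry.
    static void copyToPages(const char *data, const size_t size,
                            const std::vector<size_t> &pages)
    {
        size_t copied = 0;
        for (size_t i = 0; i < pages.size() && copied < size; ++i) {
            const size_t chunk = std::min(PageSize, size - copied);
            std::memcpy(pageAddress(pages[i]), data + copied, chunk);
            copied += chunk;
        }
    }

    // And the mirror image for reads.
    static void copyFromPages(char *data, const size_t size,
                              const std::vector<size_t> &pages)
    {
        size_t copied = 0;
        for (size_t i = 0; i < pages.size() && copied < size; ++i) {
            const size_t chunk = std::min(PageSize, size - copied);
            std::memcpy(data + copied, pageAddress(pages[i]), chunk);
            copied += chunk;
        }
    }

The page allocation question from your points (a)-(c) is clearly the
hard part; the loops above assume the page indexes were already handed
out by whatever allocator we end up with.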

>
>
> Cheers,
>
> Alex.

-- 
Best wishes,
Alexander Komyagin