Re: A question regarding rock cache_dir.

From: Alex Rousskov <rousskov_at_measurement-factory.com>
Date: Fri, 20 Sep 2013 09:47:43 -0600

On 09/19/2013 07:50 PM, Eliezer Croitoru wrote:

> 1. how is the rock organized compared to ufs?

Rock store uses a single database-style file that holds all information
about stored entries (as opposed to one entry per file approach used by
ufs). In the future, Rock store may start to use raw disk partitions as
well.

> 2. what is the rock file structure? binary I assume..?

The basic structure is a fixed-size database header followed by a
sequence of fixed-size database records or "slots". Besides the source
code itself, Rock store is documented at

http://wiki.squid-cache.org/Features/RockStore
http://wiki.squid-cache.org/Features/LargeRockStore

> 3. assuming we want to *purge* from the rock cache_dir an object when
> the server is offline what would be the pseudo code to do that?

  calculate the anchor slot position using the entry key and slot size
  load the entry metadata from the anchor slot on disk
  if the anchor slot is occupied by the entry you want to purge {
      mark the slot as empty
      write the slot [metadata] back to disk
      optional: update other slots occupied by the entry (Large Rock)
  }
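The steps above might be sketched like this against a toy on-disk layout.
Note the key-to-slot mapping, the slot metadata format, and the "occupied"
flag here are all illustrative assumptions, not Squid's actual rock format:

```python
import hashlib
import struct

HEADER_SIZE = 4096   # assumed header size
SLOT_SIZE = 16384    # assumed slot size
SLOT_COUNT = 1024    # assumed number of slots in the database

# Toy slot metadata: a 1-byte "occupied" flag followed by a 16-byte cache key.
META = struct.Struct("<B16s")

def anchor_slot(key: bytes) -> int:
    """Map an entry key to its anchor slot index (illustrative hashing)."""
    return int.from_bytes(hashlib.md5(key).digest()[:8], "little") % SLOT_COUNT

def purge(db_path: str, key: bytes) -> bool:
    """Mark the entry's anchor slot empty if it is occupied by the given key."""
    offset = HEADER_SIZE + anchor_slot(key) * SLOT_SIZE
    with open(db_path, "r+b") as db:
        db.seek(offset)
        occupied, stored_key = META.unpack(db.read(META.size))
        if occupied and stored_key == key:
            db.seek(offset)
            db.write(META.pack(0, stored_key))  # clear the occupied flag
            return True
    return False
```

The essential point is the same as in the pseudo code: only slot metadata is
rewritten; the entry's data bytes can be left in place and are simply
reclaimed when the slot is reused.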

Note that the above can be done on either a disconnected or a live store.
When purging a live store, you would need to use/update the store maps
that Squid loads into shared memory (Ipc::StoreMap and its kids).

For a disconnected store, the overall algorithm is very similar to ufs
if you replace "file" with "[anchor] slot". In the official code, the
anchor slot is the only slot used for the entry. Large Rock uses a chain
of slots to store additional entry data if needed, removing the "all
entries must fit into one slot" limitation.
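Under Large Rock's chaining, the optional step in the pseudo code above means
walking the entry's slot chain and freeing each slot. A toy sketch, where the
"next" link field and the dict-based slot table are illustrative assumptions
rather than Squid's actual metadata:

```python
def purge_chain(slots: dict, anchor: int) -> int:
    """Mark empty every slot in an entry's chain, following 'next' links.

    `slots` maps slot index -> {"occupied": bool, "next": int | None}.
    Returns the number of slots freed. Purely illustrative structures.
    """
    freed = 0
    index = anchor
    while index is not None:
        slot = slots[index]
        if not slot["occupied"]:
            break  # chain already ends here; nothing more to free
        slot["occupied"] = False
        freed += 1
        index = slot["next"]
    return freed
```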

HTH,

Alex.
Received on Fri Sep 20 2013 - 15:48:05 MDT