Re: A question regarding rock cache_dir.

From: Eliezer Croitoru <eliezer_at_ngtech.co.il>
Date: Mon, 07 Oct 2013 01:38:48 +0300

Thanks, Alex, for the detailed response.

I have been reading the RockStore wiki, and at first I couldn't quite
follow the description since the structure was a bit much to take in at
a glance.

I wanted to write some basic code that performs a couple of "actions" on
the DB, just to make sure I have understood it.

By the way, Rock ROCKS!!

Once I understand it a bit more, I hope to write a small helper that
does something with the DB file.

Eliezer

On 09/20/2013 06:47 PM, Alex Rousskov wrote:
> On 09/19/2013 07:50 PM, Eliezer Croitoru wrote:
>
>> 1. how is the rock organized compared to ufs?
>
> Rock store uses a single database-style file that holds all information
> about stored entries (as opposed to the one-entry-per-file approach used
> by ufs). In the future, Rock store may start to use raw disk partitions
> as well.
>
>
>> 2. what is the rock file structure? Binary, I assume?
>
> The basic structure is a fixed-size database header followed by a
> sequence of fixed-size database records or "slots". Besides the source
> code itself, Rock store is documented at
>
> http://wiki.squid-cache.org/Features/RockStore
> http://wiki.squid-cache.org/Features/LargeRockStore
>
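[Editor's note: to make the "fixed-size header followed by fixed-size
slots" layout concrete, here is a tiny, illustrative C++ snippet. The
header and slot sizes below are made-up placeholders, not the actual
Rock defaults, and the arithmetic is only a sketch of the idea.]

  #include <cstdint>
  #include <iostream>

  int main()
  {
      const std::uint64_t headerSize = 16 * 1024;             // hypothetical header size
      const std::uint64_t slotSize = 16 * 1024;               // hypothetical slot size
      const std::uint64_t dbFileSize = 1024ULL * 1024 * 1024; // e.g. a 1 GB db file

      // With a fixed-size header followed by fixed-size slots, slot
      // offsets are plain arithmetic:
      const std::uint64_t slotCount = (dbFileSize - headerSize) / slotSize;
      const std::uint64_t slotIndex = 12345 % slotCount;      // some slot of interest
      const std::uint64_t slotOffset = headerSize + slotIndex * slotSize;

      std::cout << slotCount << " slots; slot " << slotIndex
                << " starts at byte offset " << slotOffset << "\n";
      return 0;
  }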
>
>> 3. assuming we want to *purge* from the rock cache_dir an object when
>> the server is offline what would be the pseudo code to do that?
>
> calculate the anchor slot position using the entry key and slot size
> load the entry metadata from the anchor slot on disk
> if the anchor slot is occupied by the entry you want to purge {
>     mark the slot as empty
>     write the slot [metadata] back to disk
>     optional: update other slots occupied by the entry (Large Rock)
> }
>
> Note that the above can be done both on a disconnected or live store.
> When purging a live store, you would need to use/update the store maps
> that Squid loads into shared memory (Ipc::StoreMap and its kids).
>
> For a disconnected store, the overall algorithm is very similar to ufs
> if you replace "file" with "[anchor] slot". In the official code, the
> anchor slot is the only slot used for the entry. Large Rock uses a chain
> of slots to store additional entry data if needed, removing the "all
> entries must fit into one slot" limitation.
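[Editor's note: for anyone thinking of writing a small offline helper
along these lines, here is a rough, untested C++ sketch of the purge
steps above. Everything about the on-disk layout in this sketch (the
header size, slot size, SlotMeta struct, and key-to-slot hash) is a
made-up placeholder, not the real Rock format, which is defined by the
Rock store code in the Squid source tree.]

  #include <cstdint>
  #include <cstring>
  #include <fstream>
  #include <stdexcept>

  // Hypothetical sizes; real values depend on the cache_dir configuration.
  static const std::streamoff HeaderSize = 16 * 1024;
  static const std::streamoff SlotSize = 16 * 1024;

  // Hypothetical per-slot metadata kept at the start of each slot.
  struct SlotMeta {
      unsigned char key[16];   // MD5 store key of the entry owning this slot
      std::uint64_t entrySize; // total entry size; 0 means the slot is empty
  };

  // Purge the entry with the given store key from a *disconnected* db file.
  void purgeEntry(const char *dbPath, const unsigned char key[16],
                  std::uint64_t slotCount)
  {
      // 1. Calculate the anchor slot position from the entry key and slot
      //    size. (Toy hash; the real key-to-slot mapping is different.)
      std::uint64_t hash = 0;
      for (int i = 0; i < 16; ++i)
          hash = hash * 31 + key[i];
      const std::streamoff offset = HeaderSize + (hash % slotCount) * SlotSize;

      std::fstream db(dbPath, std::ios::in | std::ios::out | std::ios::binary);
      if (!db)
          throw std::runtime_error("cannot open rock db file");

      // 2. Load the entry metadata from the anchor slot on disk.
      SlotMeta meta;
      db.seekg(offset);
      db.read(reinterpret_cast<char *>(&meta), sizeof(meta));

      // 3. If the anchor slot is occupied by the entry we want to purge,
      //    mark the slot as empty and write the metadata back to disk.
      if (meta.entrySize > 0 && std::memcmp(meta.key, key, sizeof(meta.key)) == 0) {
          std::memset(&meta, 0, sizeof(meta));
          db.seekp(offset);
          db.write(reinterpret_cast<const char *>(&meta), sizeof(meta));
      }
      // Large Rock only: the other slots in the entry's chain would also
      // need to be updated; omitted here.
  }

A caller would still need the entry's MD5 store key and the slot count,
which can be derived from the db file size, header size, and slot size as
in the earlier snippet.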
>
>
> HTH,
>
> Alex.
>