Re: [squid-users] Cache API

From: Abhishek Chanda <abhishek.lists_at_gmail.com>
Date: Mon, 9 Jul 2012 12:32:31 -0700

Hi Amos,

I was wondering if there is documentation for the fields reported by
CacheManager. I was looking at the objects report, and I assumed that 'File
0XFFFFFFFF' means the hex code is a hash of the file, and that 'GET
http://www.iana.org/domains/example/' means the original
requester issued an HTTP GET for the page
http://www.iana.org/domains/example/. Is that correct?
Also, I could not find any documentation for the squidpurge and ufsdump
tools on the website. Where can I find it?

Sorry for all the naive questions, and thanks for your help.

Thanks

On Sun, Jul 8, 2012 at 12:27 PM, Abhishek Chanda
<abhishek.lists_at_gmail.com> wrote:
> Hi Amos,
>
> Thanks for the detailed response. This helps.
>
> Thanks
>
> On Fri, Jul 6, 2012 at 8:18 PM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
>> On 7/07/2012 5:41 a.m., Abhishek Chanda wrote:
>>>
>>> Hi Amos,
>>>
>>> I need to have a list of all files cached in a network which has
>>> multiple instances of Squid running. So, I was looking for an API to
>>> query the cache and retrieve metadata about the files there. Is there
>>> a better way to do this?
>>
>>
>> "Files"? what files?
>>
>> HTTP is a generic information transfer protocol, not a file access protocol.
>> Some of those resources are "files" on the origin server, but the large
>> majority are not even that. The objects report hints at this: it would
>> probably be called a "files" report if files were what Squid dealt with.
>>
>> The well-known UFS storage model uses system files as places to store the
>> cache data. But there is no relationship between the on-disk UFS filename
>> and the content stored there beyond a hash code in Squid's memory. Squid
>> uses these disk files the way most programs use virtual RAM / swap disk: to
>> swap out things (or bits of things) which may still be useful in future but
>> are taking up too much memory space to keep there.
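>>
>> That number-to-path mapping can be sketched roughly like this (a minimal
>> sketch modelled on the UFS fullPath() logic; the cache_dir path and the
>> l1=16 / l2=256 values are the cache_dir defaults, assumed here, not fixed):

```python
# Sketch of how a UFS-style cache_dir maps a Squid swap file number to an
# on-disk path. Note that nothing about the cached URL is encoded in the
# path; it is derived from the file number alone.
# The cache_dir path and l1/l2 defaults below are assumptions.

def ufs_full_path(filn, cache_dir="/var/spool/squid", l1=16, l2=256):
    """Return the on-disk path for swap file number `filn`."""
    first = ((filn // l2) // l2) % l1   # first-level (L1) subdirectory
    second = (filn // l2) % l2          # second-level (L2) subdirectory
    return "%s/%02X/%02X/%08X" % (cache_dir, first, second, filn)

# The file number from an objects report line such as
# "Swap Dir 0, File 0X00D05A":
print(ufs_full_path(0x00D05A))  # /var/spool/squid/00/D0/0000D05A
```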
>>
>> The best way is to query the Squid manager component; the cachemgr.cgi
>> program is just a helper that performs queries to access that information.
>> The manager has an HTTP-request-based API:
>> http://wiki.squid-cache.org/Features/CacheManager
>> http://wiki.squid-cache.org/Features/CacheManager/Objects
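>>
>> For example, a rough sketch of driving that API directly from code
>> (assuming a Squid listening on 127.0.0.1:3128; Squid 3.2+ answers manager
>> requests at /squid-internal-mgr/, older releases use the cache_object://
>> URL scheme instead, and the per-object report layout the parser expects is
>> illustrative, not an exact specification):

```python
# Sketch: fetch the cache manager "objects" report over HTTP and group
# its lines into per-object records. Host, port, and the report layout
# assumed by the parser are illustrative assumptions.
from urllib.request import urlopen

def fetch_mgr_report(page, host="127.0.0.1", port=3128):
    """Fetch a cache manager report (Squid 3.2+ URL form)."""
    url = "http://%s:%d/squid-internal-mgr/%s" % (host, port, page)
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", "replace")

def parse_objects_report(text):
    """Group report lines into records keyed by their KEY line."""
    objects, current = [], None
    for line in text.splitlines():
        if line.startswith("KEY "):
            current = {"key": line[4:].strip(), "lines": []}
            objects.append(current)
        elif current is not None and line.strip():
            current["lines"].append(line.strip())
    return objects

# Usage against a live proxy:
#   parse_objects_report(fetch_mgr_report("objects"))
```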
>>
>> If you look at the example report on that second page, you will notice a
>> mix of cached objects. Some have URLs; some have only file code numbers
>> (eg "Swap Dir 0, File 0X00D05A"). HTTP is not restricted to web pages, and
>> most web pages are not actually built from storable files.
>>
>> For on-disk storage analysis there are the squidpurge and ufsdump tools
>> for the UFS/diskd/AUFS cache storage model, and the cossdump tool in
>> squid-2.7 for the COSS storage model. We do not currently have anything to
>> dump out a report of the new rock storage database content. The in-memory
>> objects are in the vmobjects report.
>>
>> Amos
>>
>>
>>>
>>> Thanks for the link Waitman, I will look into it.
>>>
>>> Thanks
>>>
>>> On Fri, Jul 6, 2012 at 8:40 AM, Waitman Gobble <waitman_at_waitman.net>
>>> wrote:
>>>>
>>>> On 7/6/2012 12:00 AM, Amos Jeffries wrote:
>>>>>
>>>>> On 6/07/2012 12:15 p.m., Abhishek Chanda wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Does Squid have an API to query the content of the cache? I am aware
>>>>>> of cachemgr.cgi, but I am looking for an API that I can call from my
>>>>>> code.
>>>>>>
>>>>>> Thanks
>>>>>
>>>>>
>>>>> Why would your code want to reach into the code of another program and
>>>>> do
>>>>> things?
>>>>>
>>>>> What are you trying to achieve?
>>>>>
>>>>> Amos
>>>>
>>>>
>>>> Hi,
>>>>
>>>> An alternative you may want to check out is an eCAP module. Here is a
>>>> simple example which stores the chunks in MongoDB. It is possible to
>>>> combine chunks into complete documents etc.; however, it seems to
>>>> perform much better if you store the chunks and combine them later.
>>>>
>>>> https://github.com/creamy/ecap-mongo
>>>>
>>>> Waitman Gobble
>>>> San Jose, California
>>>>
>>
>>
Received on Mon Jul 09 2012 - 19:32:39 MDT

This archive was generated by hypermail 2.2.0 : Tue Jul 10 2012 - 12:00:02 MDT