Re: [squid-users] access.log redundancies and page cost

From: Phil Lucs <harrylucs@dont-contact.us>
Date: Thu, 4 Sep 2003 08:35:48 +1000

> It does. The only complication is how to find a good way of merging
> the access logs from two or more proxies to build the request graphs.
> It is not always trivial to identify which requests on an upstream
> proxy belong to the downstream request, but by keeping the time in
> synch (i.e. NTP) and sorting on time you can most likely make it
> without too much fuss.
>
> Regards
> Henrik

Ok, this is making some sense to me. I'm thinking that if we sort on time,
each forwarded request should show up on the next cache within a
millisecond-to-second discrepancy, so we can follow the path from node to
node until a HIT, or a MISS that went out to the ISP, is encountered. To
make the matching a little safer, some access.log checks can be added:
compare the requested URL and content at each hop, or do a per-node check
that keeps a handle to the previous node to confirm it requested the same
URL and content, and, on the return path, transferred the same number of
bytes.

All the best,
Phillip Lucs
