Re: [squid-users] access.log redundancies and page cost

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Thu, 4 Sep 2003 09:13:10 +0200

On Thursday 04 September 2003 00.35, Phil Lucs wrote:

> Ok, this is making some sense to me. If we sort based on time,
> there should be a millisecond-to-second discrepancy between each
> forwarded cache request, and we can follow the path until a HIT,
> or a MISS that goes to the ISP, is encountered. To make things a
> little safer, some access.log checks can be made: comparing the
> content and URL requested, or an algorithmic check at each node -
> keeping a handle to the previous node to verify it requested the
> same content and URL, and, on the return path, the same number of
> bytes.

Complications:

The number of bytes may differ slightly due to header modifications.

The graph may have more than one branch when there are multiple
possible paths and an error is encountered while exploring one of
them.
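The matching scheme discussed above (sort by time, compare URL, and
compare byte counts within a tolerance to absorb header
modifications) could be sketched roughly as follows. The field
positions follow Squid's native access.log format; the node names,
the tolerance value, and the matching rules themselves are
illustrative assumptions, not anything Squid provides:

```python
# Hedged sketch: correlating one request's path through several
# Squid caches by merging their access.log entries. Assumes each
# log line is in Squid's native format:
#   time elapsed client code/status bytes method URL ident hierarchy type
from dataclasses import dataclass

@dataclass
class Entry:
    time: float   # UNIX timestamp with millisecond precision
    node: str     # which cache wrote the line (assumed known per file)
    code: str     # e.g. "TCP_MISS/200" or "TCP_HIT/200"
    size: int     # reply size in bytes
    url: str

def parse_line(node, line):
    f = line.split()
    return Entry(float(f[0]), node, f[3], int(f[4]), f[6])

def follow_path(entries, url, byte_tolerance=512):
    """Sort the entries for one URL by time and walk the chain
    until a HIT is encountered (a MISS at the last hop means the
    request went out to the ISP)."""
    chain = []
    prev = None
    for e in sorted((e for e in entries if e.url == url),
                    key=lambda e: e.time):
        # Byte counts may differ slightly due to header
        # modifications, so compare within a tolerance rather
        # than exactly (tolerance value is an assumption).
        if prev and abs(e.size - prev.size) > byte_tolerance:
            continue  # probably a different request for the same URL
        chain.append(e)
        prev = e
        if "HIT" in e.code:
            break  # served from this cache; the path ends here
    return chain
```

For example, a child cache's MISS followed ~30 ms later by its
parent's HIT for the same URL would be linked into a two-entry
chain, even though the two reply sizes differ by a few dozen bytes.
This sketch deliberately ignores the branching complication above:
a real tool would have to keep alternative candidate paths rather
than a single chain.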

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org
If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, info@marasystems.com
Received on Thu Sep 04 2003 - 01:15:39 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 17:19:32 MST