Re: [squid-users] accelerator farm: optimizing the sibling_hit

From: Ard van Breemen <ard@dont-contact.us>
Date: Mon, 3 Mar 2003 18:29:30 +0100

On Mon, Mar 03, 2003 at 05:31:42PM +0100, Henrik Nordstrom wrote:
> Mon 2003-03-03 at 15.57, Ard van Breemen wrote:
> > Hi,
> > I am busy optimizing an accelerator farm using 2.4.4.
> > It currently contains the following *easy* patches:
> > - remove updates into client_db, this makes sure that when you
> > have a major site, your accelerator won't run out of memory.
> > This means commenting out the few calls to clientdbUpdate
>
>
> Which already is done in the form of the "client_db off" directive in
> squid.conf..

Yes, I already noticed that. This means we can do a fast upgrade
to the latest version, and only carry the patches that are not
yet obsoleted ;-). Unfortunately my current job will end within a
month, so I will not have the time to start an upgrade project
:-(. (An upgrade also means testing etc...)
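For the record, the directive Henrik mentions is a one-liner in
squid.conf, which replaces the patch that commented out the
clientdbUpdate() calls (a minimal fragment, not our production
config):

```
# Disable the per-client statistics database entirely, so a
# busy accelerator does not grow an entry for every client IP
# and eventually run out of memory.
client_db off
```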

> > - Allow refreshes to be done by peers.
> > This means removing a test in neighbors.c, function
> > peerAllowedToUse, test: request->flags.refresh
>
> You also get the same or at least similar effect by setting
> "prefer_direct off" IIRC, but this probably only applies to parents.

Our config is a farm of siblings, no parents.
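For context, such a farm is typically declared with sibling
cache_peer lines on every accelerator (hostnames and ports here
are illustrative, not our actual config):

```
# Every accelerator lists the others as siblings, no parents.
# 3130 is the ICP port; proxy-only avoids storing a second copy
# of an object that a sibling already holds.
cache_peer accel2.example.net sibling 80 3130 proxy-only
cache_peer accel3.example.net sibling 80 3130 proxy-only
```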

> To allow refreshes via siblings you must also change Squid to not use
> "only-if-cached" when requesting the object from the sibling, or else
> the request will be rejected by the sibling.

But doesn't the sibling only answer UDP_HIT when it has a cached
object that is not stale? (icp_hit_stale is off...)
It is already configured to allow fetches for other squids, so
even if the object had been stale, it would have been fetched
anyway.

> > I am now thinking of the next ICP improvement, because having the
> > same aging criteria on all accelerators will give the following
> > problems: it will expire on all servers at exactly the same time.
>
> I would probably solve this by adding some expiry fuzziness on the web
> server, and also by changing Squid to allow multiple cache revalidations
> of the same object to be collapsed into one.

Hmmmm, the idea is to have one request to the web server, and
have that cached (via ICP) by the complete farm, so we cannot
add fuzziness on the web server side. I therefore wanted to add
the fuzziness on the accelerator. Of course the most important
thing is that we want the multiple cache revalidations collapsed
into one, but I mean collapsed across all peers. The problem I
face is that at almost exactly the same moment we get multiple
requests for the same URL at different peers. By adding
calculated fuzziness into the farm, the URL will probably be
refreshed DIRECT by only one peer. The others will of course get
a SIBLING_HIT or a DIRECT_TIMEOUT.
Of course it would be beautiful if only one peer would say
something like UDP_MISS_WILL_FETCH... which would make all my
hackish plans obsolete.

BTW: we are working with expire times between 30 seconds and 3
months. Fetch times can go up to 10 seconds depending on who
programmed that script :-(... Our peers are on the same network,
so there is almost no latency between them.

-- 
program signature;
begin  { telegraaf.com
} writeln("<ard@telegraafnet.nl> SMA-IS | Geeks don't get viruses");
end
.
Received on Mon Mar 03 2003 - 10:29:35 MST
