Re: [squid-users] Re: [squid-users] centralized storage for squid

From: Adrian Chadd <adrian@dont-contact.us>
Date: Fri, 7 Mar 2008 12:18:46 +0900

On Fri, Mar 07, 2008, Siu Kin LAM wrote:

> Actually, that is my case.
> The URL hash is helpful for reducing duplicated
> objects. However, once a squid server is added or removed,
> the load balancer needs to recalculate the URL hash,
> which causes a lot of TCP_MISS in the squid servers at the
> initial stage.
>
> Do you have the same experience?
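
(A minimal sketch, not Squid or load-balancer code, of the miss storm described above: with a plain modulo URL hash, shrinking the pool from four proxies to three remaps most URLs to a different server, so those requests miss. The URLs and the md5-based hash here are made up for illustration.)

```python
# Sketch: why removing one proxy from a modulo URL-hash scheme
# remaps most objects and causes a TCP_MISS storm.
import hashlib

def bucket(url: str, n_servers: int) -> int:
    # Stable hash of the URL, then modulo over the server count.
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return h % n_servers

urls = [f"http://example.com/obj/{i}" for i in range(10000)]

before = [bucket(u, 4) for u in urls]   # four proxies in the pool
after  = [bucket(u, 3) for u in urls]   # one proxy removed

moved = sum(b != a for b, a in zip(before, after))
print(f"{moved / len(urls):.0%} of URLs changed server")  # roughly 75%
```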

This is the sort of stuff that the Cisco implementation of WCCPv2
got "right".

I.e., when a cache dropped out it wouldn't recalculate the hash
immediately. It would maintain the existing allocations and slowly
move the failed cache's hash buckets over to the remaining caches.

When the proxy came back, the hash bucket allocation would slowly
revert to how it was.
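
(A rough sketch of that bucket-based scheme, not the actual WCCPv2 protocol: URLs hash into a fixed table of 256 buckets, each bucket is owned by a cache, and when a cache fails only its buckets are reassigned, so the surviving caches keep all their existing objects. The cache names and the gradual-migration detail are simplified here; WCCPv2 also spreads the reassignment out over time.)

```python
# Sketch of WCCPv2-style bucket assignment: a fixed 256-entry hash
# table maps bucket -> cache; a failed cache's buckets are reassigned
# while every other bucket (and its cached objects) stays put.
import hashlib

N_BUCKETS = 256

def make_table(caches):
    # Deal buckets round-robin across the live caches.
    return [caches[b % len(caches)] for b in range(N_BUCKETS)]

def fail_cache(table, dead, live):
    # Reassign ONLY the dead cache's buckets; all others are untouched.
    return [live[b % len(live)] if c == dead else c
            for b, c in enumerate(table)]

def lookup(table, url):
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return table[h % N_BUCKETS]

caches = ["cache1", "cache2", "cache3", "cache4"]
table = make_table(caches)

live = [c for c in caches if c != "cache2"]
new_table = fail_cache(table, "cache2", live)

moved = sum(a != b for a, b in zip(table, new_table))
print(f"{moved}/{N_BUCKETS} buckets moved")  # only cache2's 64 buckets
```

Restoring the returned cache is the same operation in reverse: give it back its old buckets and leave the rest alone, which is why the allocation can "revert to how it was" without disturbing the other caches.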

You should poke your L4 balancer vendor and explain the situation. ;)
I'm surprised none of them have done it better.

That said, have you investigated using ICP or cache digests between your
proxies?

Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
Received on Thu Mar 06 2008 - 20:04:15 MST

This archive was generated by hypermail pre-2.1.9 : Tue Apr 01 2008 - 13:00:04 MDT