Re: [squid-users] centralized storage for squid

From: Mark Nottingham <mnot@dont-contact.us>
Date: Tue, 11 Mar 2008 09:23:52 +1100

This is the problem that CARP and other consistent-hashing
approaches are supposed to solve: when a cache is added or
removed, only a small fraction of URLs get remapped, instead
of nearly all of them (a small demonstration follows the
quoted thread below). Unfortunately, the Squid in front will
often be a bottleneck...
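For concreteness, a minimal squid.conf sketch of such a front
end (the hostnames are placeholders, and this assumes a Squid
version whose cache_peer directive accepts the carp option):

    # front-end Squid: hash each URL onto one CARP parent
    cache_peer cache1.example.com parent 3128 0 carp no-query
    cache_peer cache2.example.com parent 3128 0 carp no-query
    cache_peer cache3.example.com parent 3128 0 carp no-query
    # always forward to a parent, never fetch directly
    never_direct allow all

With CARP, adding or removing one parent remaps only roughly
1/N of the URL space rather than invalidating almost every
placement.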

Cheers,

On 07/03/2008, at 1:43 PM, Siu Kin LAM wrote:

> Hi Pablo
>
> Actually, that is exactly my case.
> URL hashing is helpful for reducing duplicated
> objects. However, once a squid server is added or
> removed, the load balancer has to recalculate the
> URL hashes, which causes a lot of TCP_MISS results
> in the squid servers at the initial stage.
>
> Do you have the same experience?
>
> Thanks
>
>
> --- Pablo García <malevo@gmail.com> wrote:
>
>> I dealt with the same problem by putting a load
>> balancer in front of the cache farm and using a
>> URL-hash algorithm to send the same URL to the
>> same cache every time. It works great, and it
>> also increases the hit ratio a lot.
>>
>> Regards, Pablo
>>
>> 2008/3/6 Siu Kin LAM <sklam2005@yahoo.com.hk>:
>>> Dear all
>>>
>>> At this moment, I have several squid servers for
>>> HTTP caching. Many duplicated objects have been
>>> found on different servers. I would like to
>>> minimize data storage by installing a large
>>> centralized storage system and having the squid
>>> servers mount it as their data disk.
>>>
>>> Has anyone tried this before?
>>>
>>> thanks a lot
>>>
>>>
>>>
>>
>
>
>
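To put numbers on the remapping cost described above, here is
a small, hypothetical Python sketch (none of this is Squid or
load-balancer code; the server names and URL set are made up)
comparing naive modulo hashing against a consistent-hash ring
when a fourth cache is added:

    import hashlib
    import bisect

    def h(key):
        # stable hash so runs are reproducible (unlike built-in hash())
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def modulo_map(urls, servers):
        # naive scheme: server index = hash(url) % number of servers
        return {u: servers[h(u) % len(servers)] for u in urls}

    class Ring:
        # minimal consistent-hash ring with virtual nodes per server
        def __init__(self, servers, vnodes=100):
            self.points = sorted((h("%s#%d" % (s, i)), s)
                                 for s in servers for i in range(vnodes))
            self.keys = [p for p, _ in self.points]
        def lookup(self, url):
            i = bisect.bisect(self.keys, h(url)) % len(self.points)
            return self.points[i][1]

    urls = ["http://example.com/obj/%d" % i for i in range(10000)]
    old = ["cache1", "cache2", "cache3"]
    new = old + ["cache4"]

    # fraction of URLs whose cache assignment changes
    m_old, m_new = modulo_map(urls, old), modulo_map(urls, new)
    moved_mod = sum(m_old[u] != m_new[u] for u in urls) / float(len(urls))

    r_old, r_new = Ring(old), Ring(new)
    moved_ring = sum(r_old.lookup(u) != r_new.lookup(u)
                     for u in urls) / float(len(urls))

    print("modulo hashing:  %.0f%% remapped" % (100 * moved_mod))   # ~75%
    print("consistent ring: %.0f%% remapped" % (100 * moved_ring))  # ~25%

Going from three caches to four, the modulo scheme remaps about
75% of URLs (every URL whose hash disagrees mod 3 and mod 4),
while the ring remaps only the roughly 25% the new cache has to
take over, so far fewer TCP_MISSes at cut-over.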

--
Mark Nottingham       mnot@yahoo-inc.com
Received on Mon Mar 10 2008 - 16:24:32 MDT