[squid-users] Re: [squid-users] centralized storage for squid

From: Siu Kin LAM <sklam2005@dont-contact.us>
Date: Fri, 7 Mar 2008 10:43:14 +0800 (CST)

Hi Pablo

Actually, that is exactly my case.
URL hashing helps to reduce duplicated objects.
However, whenever a Squid server is added or removed,
the load balancer has to recalculate the URL-to-server
hash mapping, which causes a lot of TCP_MISS results
on the Squid servers in the initial stage.
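
(Illustrative sketch, not from this thread: the Python below assumes
the load balancer uses a plain hash-modulo mapping, and compares it
with a consistent-hash ring, which remaps only about 1/n of the URLs
when one cache is added. All server names and URLs are made up.)

import hashlib
from bisect import bisect

def h(key):
    # Stable 128-bit hash of a string.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def modulo_pick(url, servers):
    # Naive URL hash: the mapping changes for most URLs
    # whenever len(servers) changes.
    return servers[h(url) % len(servers)]

class Ring:
    # Consistent-hash ring with virtual nodes per server.
    def __init__(self, servers, replicas=100):
        self.points = sorted((h("%s#%d" % (s, i)), s)
                             for s in servers for i in range(replicas))
        self.keys = [p for p, _ in self.points]

    def pick(self, url):
        # First ring point at or after the URL's hash, wrapping at the end.
        i = bisect(self.keys, h(url)) % len(self.points)
        return self.points[i][1]

urls = ["http://example.com/obj%d" % i for i in range(10000)]
before = ["cache1", "cache2", "cache3"]
after = before + ["cache4"]

moved_mod = sum(modulo_pick(u, before) != modulo_pick(u, after) for u in urls)
r1, r2 = Ring(before), Ring(after)
moved_ring = sum(r1.pick(u) != r2.pick(u) for u in urls)
print("modulo: %.0f%% of URLs remapped" % (100.0 * moved_mod / len(urls)))   # ~75%
print("ring:   %.0f%% of URLs remapped" % (100.0 * moved_ring / len(urls)))  # ~25%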

Have you had the same experience?

Thanks

--- Pablo García <malevo@gmail.com> wrote:

> I dealt with the same problem by putting a load balancer
> in front of the cache farm and using a URL-hash algorithm
> to send the same URL to the same cache every time. It
> works great, and also increases the hit ratio a lot.
>
> Regards, Pablo
>
> 2008/3/6 Siu Kin LAM <sklam2005@yahoo.com.hk>:
> > Dear all
> >
> > At the moment, I have several Squid servers for HTTP
> > caching. Many duplicated objects have been found on
> > different servers. I would like to minimize data storage
> > by installing a large centralized storage system and
> > having the Squid servers mount it as their data disk.
> >
> > Has anyone tried this before?
> >
> > thanks a lot
> >
> >
>

Received on Thu Mar 06 2008 - 19:43:22 MST

This archive was generated by hypermail pre-2.1.9 : Tue Apr 01 2008 - 13:00:04 MDT