Re: [squid-users] Squid limits and hardware spec

From: Ow Mun Heng <Ow.Mun.Heng@dont-contact.us>
Date: Thu, 02 Dec 2004 14:06:54 +0800

On Thu, 2004-12-02 at 13:13, Martin Marji Cermak wrote:
> Ow Mun Heng wrote:
> > On Mon, 2004-11-29 at 11:32, Martin Marji Cermak wrote:

> >>USED CONFIGURATION:
> >>maximum_object_size 51200 KB (SHOULD I MAKE IT HIGHER ???)
> >
> > I made mine to cache up to 40MB only. If you really want to have more
> > byte hit ratio, then by all means, up the max_obj_size.
>
> OK, now I have:
> maximum_object_size 200 MB

That means Squid will cache objects up to 200MB each; anything larger gets passed through but is not cached at all.

You can even store ISO files if your users download Linux ISOs. Just
need to up that 200MB to say 800MB.
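
For example, a hypothetical squid.conf line (adjust the figure to the largest ISO you expect to see):

maximum_object_size 800 MB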

>
> >>cache_dir aufs /cache 25000 16 256
> >> (one ide disk, see the spec above)
> >
> >
> > This seems too low. I used 40GB of the 80GB drive
> OK, I changed it to
> cache_dir aufs /cache 92000 16 256

You might also want to change your L1 directories: for a 90GB cache,
having only 16 L1 directories is probably too few.

How to calculate the L1 directory count (example: 30GB cache):
x = size of the cache_dir in KB (30GB ≈ 30,000,000 KB)
y = average object size (just use 15 KB)
z = number of L2 directories under each L1 directory (256)

(((x / y) / 256) / 256) * 2 = number of L1 directories

30,000,000 / 15 = 2,000,000
2,000,000 / 256 ≈ 7812.5
7812.5 / 256 ≈ 30.5
30.5 * 2 ≈ 61, rounded to 60

cache_dir aufs /squidcache/cache1 30000 60 256
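
Applying the same formula to your 92000 MB cache (assuming the same 15KB average object size):

92,000,000 / 15 ≈ 6,133,333
6,133,333 / 256 ≈ 23,958
23,958 / 256 ≈ 94
94 * 2 ≈ 188

cache_dir aufs /cache 92000 188 256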

Just out of curiosity, what filesystem is your cache on? Ext3? reiserfs?

Do you expect to cache more _large_ files or more small files? I use
reiserfs (I anticipate caching mostly small files).

You can query the cache for this kind of breakdown, but I can't remember
the exact 'form' of the query.
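
If I recall correctly it goes through the cache manager interface, so something like this should work (assuming squidclient is installed and Squid is on its default port):

squidclient mgr:info      # general runtime statistics
squidclient mgr:storedir  # per-cache_dir usage breakdown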

>
>
> >>cache_mem 8 MB
> > 200 MB. More being cached to memory. Faster retrieval.
> Thank you, nice. I just hope it does not start swaping :-)

How much memory do you have?

For a 90GB cache, assuming roughly 10MB of RAM per 1GB of cache for the
index, you'd better have something like 900MB of RAM.
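
Rough arithmetic for your box: 92GB * 10MB/GB ≈ 920MB for the index, plus your 200MB cache_mem, plus OS and Squid overhead, so I'd want well over 1GB of RAM to stay out of swap.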

> >
> > Say.. do you have any experience running a load balanced squid? I'm
> > wondering, since it's transparent, what happens if Squid Goes down? (for
> > X Reasons?) What happens to your ADSL users? (in the thousands??)
> I am in a testing phase, trying to find out what can just one squid
> handle - what are its limits. Then I will install a little Squid farm.
>
> If Squid goes down, it drops all established connections.
Yeah.. I figured as much. My very own fear.

> So, I am
> supposed to have my Squid in a good shape :-), stable and running
> without stopping/crashing.
> The "thousands" means approx. 3500 users at the moment.
OK.. and they're all accessing 1 cache? Wow.

>
>
> > Are you logging a lot of things? If you are, your IDE disk may not be
> > able to sustain the throughput.
> Yes, you are right, I was logging quite a lot. I modified the debug
> module a bit (I can set a debug level for each module, e.g.:
> debug_options ALL,1;14,2;99,4
> ) so now I log only info I need

Good on you.

> I will report some stats to the list, when I have more info (after I run
> squid in this configuration for more days).

Please do tell. I'm looking into how to implement Squid in such an
environment.

I'm also looking into ultramonkey.org and linuxvirtualserver.org as a
means of load-balancing. But again, if I'm not mistaken, the
Ultramonkey/LVS box then becomes the bottleneck/single point of failure.

>
> Have a nice day,
If you post back the results, I sure will.

> Marji

--
Ow Mun Heng
Gentoo/Linux on D600 1.4Ghz 
Neuromancer 13:58:24 up 4:09, 7 users, 0.51, 0.43, 0.23 