Re: [squid-users] throughput limitation from cache

From: Richard Mittendorfer <delist@dont-contact.us>
Date: Sat, 14 Jan 2006 16:02:27 +0100

Thus spoke Henrik Nordstrom <hno@squid-cache.org> (Sat, 14 Jan 2006
14:26:20 +0100 (CET)):
> On Sat, 14 Jan 2006, Richard Mittendorfer wrote:
> >> Why I ask is because diskd is known to be somewhat slow on large
> >> cache
> >
> > Not really large. 2x 1G. It's no storage bottleneck, I believe.
>
> large cache hits == hits on largeish cached objects.

Oh, sure. Didn't have enough coffee this morning... :-)
 
> >> hits in certain situations UNLESS there is sufficient traffic to
> >> keep Squid reasonably busy (i.e. problems if you are the only
> >> user, or very few users). And the same for aufs in older versions
> >> of Squid.
> >
> > See. Would fit.
>
> A quick test if this is your problem is to reconfigure your Squid to
> use the ufs cache_dir type.
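
For anyone following the thread: the quick test just swaps the
cache_dir type and restarts Squid; a rough sketch of the change
(paths here are placeholders, sizes taken from my 2x 1G dirs):

    # squid.conf: same dirs, ufs instead of diskd
    # cache_dir <type> <directory> <Mbytes> <L1> <L2>
    #cache_dir diskd /cache1 1024 16 256
    #cache_dir diskd /cache2 1024 16 256
    cache_dir ufs /cache1 1024 16 256
    cache_dir ufs /cache2 1024 16 256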

7.30 MB/s with ufs. That helps. A little slower with aufs: 6.85 MB/s.

Hmm... however, aufs (POSIX threads?) seems to malloc a lot of
memory. Running on a mere 256M of RAM and offering a good many other
services, Committed_AS climbs to 550M (340M with diskd), and Squid
hasn't even been used yet. I suppose it will get swapped out much
more easily. Will memory consumption be much higher with aufs than
with diskd(/ufs)?
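
In case anyone wants to reproduce the comparison: the figures above
are just Committed_AS from /proc/meminfo on Linux, plus a look at the
Squid process itself (a minimal sketch, nothing Squid-specific
assumed):

    # overall VM commitment (the 550M / 340M figures):
    grep Committed_AS /proc/meminfo

    # per-process view of the running squid (VSZ/RSS in KB):
    ps -o pid,vsz,rss,comm -C squid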

I'll see in a few hours/days.

> Regards
> Henrik

THX ritch
Received on Sat Jan 14 2006 - 08:02:51 MST
