Re: [squid-users] tune up

From: Wennie V. Lagmay <wlagmay@dont-contact.us>
Date: Mon, 30 May 2005 13:35:17 +0300

Since I'm using diskd and I'm going to change it to aufs, do I need to
recompile Squid? If so, is there another way of enabling aufs without
recompiling? Also, can I change my configuration from diskd to aufs
directly?

thanks,

wennie
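
For reference, aufs is a compile-time store module, so whether a rebuild is
needed depends on whether the running binary was built with it. A rough sketch
of how to check, and of the rebuild if aufs turns out to be missing; the
--enable-storeio list below is only an illustration, so keep whatever other
configure options your current build uses:

    # list the configure options the installed binary was built with
    squid -v

    # if "aufs" does not appear in --enable-storeio, rebuild from source, e.g.:
    ./configure --enable-storeio=aufs,diskd,ufs
    make && make install

If aufs is already compiled in, the switch itself is just the squid.conf change
plus a restart, as Henrik notes further down in the thread.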

----- Original Message -----
From: "Steven Wilton" <swilton@q-net.net.au>
To: <squid-users@squid-cache.org>
Sent: Monday, May 30, 2005 10:42 AM
Subject: RE: [squid-users] tune up

>
> If you're referring to my postings about a month ago, I've been doing some
> further tests after getting some pointers from different people, and the
> results are different.
>
> We have a number of sets of proxies in different locations, each set being
> load-balanced using WCCP and a layer 3 switch. My results were different
> when comparing caches with lower loads (avg 39 req/sec, peak 70 req/sec)
> than when comparing caches with higher loads (avg 170 req/sec, peak
> over 300 req/sec).
>
> I was using the aufs cache_dir type, as I have found this to be
> significantly faster than diskd when running on Linux. The different
> parameters that I was comparing were the load average (with aufs, as disk
> i/o increases, there will be more threads waiting on disk i/o, which will
> push the load average up), the disk utilisation (% of time each disk had
> active operations) and CPU utilisation.
>
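
For reference, the three figures compared above can all be read from standard
Linux tools; a minimal sketch, assuming the sysstat package is installed for
iostat (the 5-second interval is arbitrary):

    uptime        # 1/5/15-minute load averages
    iostat -x 5   # per-disk extended statistics; %util is the share of time the disk had active requests
    vmstat 5      # CPU usage breakdown (us/sy/id/wa columns)
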
> I found that under low loads, ext3 mounted with data=writeback (the same
> level of data protection as other journalled filesystems) gave the best
> numbers (i.e. lower CPU, lower disk utilisation and lower load average).
>
> I found that on our more loaded systems, reiserfs had lower disk
> utilisation and a lower load average, at a slight cost in CPU time.
>
> So, if the disk i/o is going to be a bottleneck (as it is in our case),
> reiserfs is probably a better choice. If CPU is the main bottleneck, then
> ext2/3 may be the best choice.
>
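
To make the comparison concrete, a rough sketch of what /etc/fstab entries
along these lines might look like; the device names and mount points are
placeholders, and noatime/notail are common extra tunings for cache partitions
rather than something measured in this thread:

    /dev/sdb1   /cache1   ext3       noatime,data=writeback   0 0
    /dev/sdc1   /cache2   reiserfs   noatime,notail           0 0
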
> It also looks like reiserfs may use more resources under low load, but
> scales better at the higher loads. This confirms the results of previous
> benchmarks that show reiserfs to provide the highest throughput for a
> squid
> proxy server (using the Web Polygraph program).
>
> Steven
>
>> -----Original Message-----
>> From: Wennie V. Lagmay [mailto:wlagmay@yanbulink.net]
>> Sent: Monday, May 30, 2005 12:49 PM
>> To: Henrik Nordstrom
>> Cc: Henrik Nordstrom; azeem ahmad; squid-users@squid-cache.org
>> Subject: Re: [squid-users] tune up
>>
>> Another question regarding the file system: I'm using reiserfs for my
>> cache partition, and I've read that ext3 is faster than reiserfs. If so,
>> is there a way or an option to make reiserfs as fast as ext3? What are
>> the parameters to be used in fstab to make reiserfs fast?
>>
>> In your experience which is the best file system for squid?
>>
>> Thank you very much,
>>
>> wennie
>> ----- Original Message -----
>> From: "Henrik Nordstrom" <hno@squid-cache.org>
>> To: "Wennie V. Lagmay" <wlagmay@yanbulink.net>
>> Cc: "Henrik Nordstrom" <hno@squid-cache.org>; "azeem ahmad"
>> <azeem81@msn.com>; <squid-users@squid-cache.org>
>> Sent: Saturday, May 28, 2005 6:41 PM
>> Subject: Re: [squid-users] tune up
>>
>>
>> > On Sat, 28 May 2005, Wennie V. Lagmay wrote:
>> >
>> >> It's only now that I've learned about this cache_dir issue. I'm using
>> >> FC2 64 bit and using diskd for my cache_dir. Is there a way to migrate
>> >> my cache_dir to aufs without harming my cache server?
>> >
>> > Yes. Modify squid.conf and restart your Squid.
>> >
>> >> cache_dir aufs /cache1/spool/squid 25000 16 256
>> >> cache_dir aufs /cache2/spool/squid 25000 16 256
>> >> cache_dir aufs /cache3/spool/squid 25000 16 256
>> >
>> > Regards
>> > Henrik
>>
>>
>
Received on Mon May 30 2005 - 04:35:25 MDT
