Re: [SQU] reiserfs

From: Joe Cooper <>
Date: Wed, 03 Jan 2001 13:51:06 -0600

Florin Andrei wrote:

>> From: "Phil Pierotti" <>
>> nolog = turn off journalling
>> journalling only helps your machine recover from crashes
>> faster, and this box doesn't crash (yeah it's a risk, but one I'm
>> willing to take, 6 months and counting)
> I tend to disagree with this. I think (but I might be wrong) that the
> journaling in ReiserFS doesn't affect performance very much.
> And if the system DOES crash, it's better to have a journaled fs...

The use of nolog is of very little benefit, in my tests (pretty
extensive). It shows about a 5% gain, maybe a little more. I don't
consider it an option, however, as system recovery after a power
failure is vital...and reiserfsck doesn't really work very well.
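For anyone who wants to try it anyway: nolog is just a reiserfs mount
option, so it can be set per-filesystem in /etc/fstab. The device and
mount point below are placeholders, not a recommendation:

```
# journaling disabled via nolog; noatime is a separate, common cache tweak
# /dev/sdb1 and /cache are made-up examples
/dev/sdb1   /cache   reiserfs   noatime,nolog   0 0
```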

> Basically, you're just using ReiserFS for its faster storage mechanisms,
> not for the journals. That's interesting... I did some tests with
> ReiserFS: creating A LOT of small files and/or directories in a loop -
> it is soooo much faster than Ext2! So, it's very nice to have ReiserFS
> for a caching proxy. I've heard rumours that Linus said that ReiserFS
> will get into the kernel as soon as 2.4.1 appears. ;-)
> I just got an idea: ReiserFS is much better at working with many files
> in the same directory. How about changing the cache settings? I mean,
> let's try to store many files in the same subdirectory, not just 255. Do
> you think it will be an advantage?

We (myself, and the ReiserFS developers) thought this would, in fact, be
an advantage. But there are other problems preventing it from actually
being the case. 256 is faster than 1024, and faster than one big
honking directory containing every cached object. It has something to
do with locking, I seem to recall. (BTW: a non-async compile is faster
on one big directory containing all objects, but async on multiple
256-file directories is faster still.)
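Florin's small-file test above is easy to reproduce. Here's a rough
sketch: create N tiny files in one flat directory, then the same N
spread across 16 subdirectories, timing each pass. N, the fan-out of
16, and the scratch location are all arbitrary choices, not anything
from Squid itself:

```shell
#!/bin/sh
# Crude small-file creation benchmark: flat directory vs. subdirectories.
N=2000
SCRATCH=$(mktemp -d)

# Pass 1: all files in a single directory
mkdir "$SCRATCH/flat"
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    echo x > "$SCRATCH/flat/f$i"
    i=$((i + 1))
done
echo "flat  ($N files, 1 dir):   $(( $(date +%s) - start ))s"

# Pass 2: same files spread over 16 subdirectories
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    d="$SCRATCH/split/$((i % 16))"
    [ -d "$d" ] || mkdir -p "$d"
    echo x > "$d/f$i"
    i=$((i + 1))
done
echo "split ($N files, 16 dirs): $(( $(date +%s) - start ))s"

# Sanity counts, then clean up
flat_count=$(ls "$SCRATCH/flat" | wc -l)
split_count=$(find "$SCRATCH/split" -type f | wc -l)
rm -rf "$SCRATCH"
```

Run it on an ext2 partition and a reiserfs partition and compare; with
only 2000 files the one-second timer granularity is coarse, so bump N
up until the difference is visible.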

> (I'm not sure if those settings in the cache_dir are
> filesystem-dependent or not. If they are, maybe there are different
> optimal settings for different filesystems?... And maybe this depends on
> the OS too?...)

Yep. But 256 is a pretty good number for most OSs, including Linux +
ReiserFS.
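For reference, those numbers are the last two arguments to cache_dir
in squid.conf: the first-level (L1) and second-level (L2) directory
counts. The path and size below are placeholders:

```
# cache_dir <type> <path> <size-MB> <L1> <L2>
# 16 first-level dirs, 256 second-level dirs -- the usual defaults
cache_dir ufs /cache 1000 16 256
```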

> Did you guys try to obtain some form of sponsorship from a Big
> Company, in order to work more at Squid? I believe that Squid now is
> really ready for enterprise usage, but there are a lot of stoopid things
> that still have to be done. Like, as an example, this sort of tests -
> take all the filesystem-OS-CPU combinations and see what happens when
> you change the parameters on cache_dir and maybe on some other
> squid.conf entries...

If performance is your concern, this has been done extensively on Linux
by me and several others. Duane has done extensive testing on FreeBSD
(see the cacheoff numbers for Linux/Squid in Swell's entry and
FreeBSD/Squid in Duane's Squid team entry). Since those are the two
most popular OSs to run Squid on, I'm satisfied. ;-) But if you want
to test other combinations, I'm sure no one would try to stop you.
Testers with real workloads are needed for the latest DEVEL snapshots,
as the guys move 2.4 towards stable. So, if you'd like to really help
the development effort, try to find some crashers in a Squid 2.4-DEVEL
daily snapshot.

Take a look over at the Squid-dev mailing list for quite a lot of
performance discussion. It has been a primary topic of discussion for
several months, and much work is underway to improve Squid's memory,
disk (speed, not space), and CPU efficiency.

The problems with Squid's performance are not really the realm of minor
tweaks, anymore (like how many files per directory, or how many threads,
or what type of compile--async, diskd, or single process). All of those
things have been quantified pretty thoroughly (at least on the popular
OSs), and discussed quite a bit here and on Squid dev. Check the
archives for more.

There was once occasional sponsorship from Big Companies, though that is
no longer really the case, and most Squid developers (particularly the
three most active and vital to the cause: Duane, Henrik, and Adrian) are
volunteers. That's not to say that there aren't paid developers working
on Squid as well (Threshold Networks is paying the ReiserFS team to work
on Squid part time, and Swell Technology pays someone part time for
Squid development, and I occasionally make myself useful to the
development effort--or at least try not to get in the way too much as I
peer intently over the shoulders of the guys actually doing all the hard
work.)

                      Joe Cooper <>
                  Affordable Web Caching Proxy Appliances

To unsubscribe, see
Received on Wed Jan 03 2001 - 12:48:00 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:57:20 MST