Re: Squid performance on the bake-off

From: Glenn Chisholm <glenn@dont-contact.us>
Date: Sun, 11 Apr 1999 21:32:53 -0600 (MDT)

> Ok, there's the Unix FS handicap to account for, but we are talking
> about _more_than_six_times_ the performance, and in any case conventional
> wisdom says that a well-tuned FFS (as is the case with FreeBSD) approaches
> 50% of the hardware maximum transfer rate.
        
        The problem is not so much the transfer rate; it is the fact that
we are dealing with massive numbers of small files. They are the cause of
the bad performance with a normal file system, just as they would be with
any other multipurpose file system.
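
        For what it is worth, the pattern we are fighting looks roughly
like the sketch below. This is purely illustrative C, not Squid's actual
store code: under UFS every cached object becomes its own small file, so
each store pays for an open(), a write() and a close(), plus the
directory and inode updates the file system does behind the scenes.

    /* Illustrative only -- not Squid's store code.  One small file per
     * cached object means per-object metadata work dominates the I/O. */
    #include <stddef.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int store_object(const char *path, const void *buf, size_t len)
    {
        /* O_CREAT costs a directory entry and inode update per object */
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        ssize_t n = write(fd, buf, len);   /* the data itself: a few KB */
        close(fd);
        return (n == (ssize_t)len) ? 0 : -1;
    }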

> Based on my experience with Squid, neither that CPU nor the disks can
> explain a 545% difference, but just maybe (depending on the total size
> and distribution of the cacheable data), perhaps 100% more RAM can
> explain at least a part of it. And the cost for twice (or even four
> times) the RAM would be negligible, especially if compared to other
> alternatives (more/faster disks, etc). I'm ignorant enough about the
> Polygraph benchmark to be unable to elaborate further.

        The amount of RAM explains none of it. Doubling the RAM in the
machine was simply not an option; it would have required an entirely new
machine (512MB is all that fits on that motherboard).

        I have rerun the same tests that were run at the Bake-off back at
our lab on alternate hardware with twice the RAM and a great deal more
disk and CPU power, and gained nothing as long as we were using UFS. The
machine at the Bake-off was not thrashing at all. That is not to say that
Squid did its best at the Bake-off; I believe that Duane had about 10
minutes to set up the machine and decide what request rates to run. We
were all rather busy and it was not a priority.

        A few simple tests have shown that we do a great deal better with
alternative file systems. Profiling the machine while running with UFS
shows the thrashing that occurs in the file system. As we have said
before, a custom file system will allow us to remove the major bottleneck
from Squid, which will then start to show where Squid itself is
inefficient and which other UNIX issues are slowing us down. At the
moment Duane is working on SquidFS, and when he has the time to get that
finished we should start being able to make Squid much faster.
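
        To give a flavour of why a custom store helps (this is only my
own sketch of the general idea, not Duane's actual SquidFS design):
instead of one file per object, objects can be appended into a single
large preallocated file with the offsets kept in memory, so a store is
one write() with no per-object directory or inode work.

    /* Sketch of the general idea only -- hypothetical, not SquidFS. */
    #include <sys/types.h>
    #include <unistd.h>

    struct slot {
        off_t  offset;             /* where the object starts in the file */
        size_t length;             /* how many bytes it occupies */
    };

    static off_t next_free;        /* next free byte in the big cache file */

    static int store_object(int cache_fd, const void *buf, size_t len,
                            struct slot *s)
    {
        /* one data write, no directory entry or inode created per object */
        if (pwrite(cache_fd, buf, len, next_free) != (ssize_t)len)
            return -1;
        s->offset = next_free;
        s->length = len;
        next_free += (off_t)len;
        return 0;
    }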

        People should remember that at the rates Squid exhibited at the
Bake-off it would quite happily serve 5MB/sec, which is more than enough
for most Squid users.
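
        (For scale: 5MB/sec is about 40Mbit/sec, or close to a full T3,
which is more bandwidth than the vast majority of sites have to the
outside world.)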

glenn