Re: aio_read implementation in Squid

From: Joe Cooper <joe@dont-contact.us>
Date: Thu, 13 Sep 2001 16:26:42 -0500

Andres Kroonmaa wrote:

> On 13 Sep 2001, at 15:30, Joe Cooper <joe@swelltech.com> wrote:
>
>
>>Andres Kroonmaa wrote:
>>
>>>On 13 Sep 2001, at 18:01, Venkatesh <gv_kovai@yahoo.com> wrote:
>>>
>>>
>>>>* But Before doing that, I should get fair decision of what I/O will be good
>>>>for Squid in all aspects to achieve 1k req/sec.
>>>>
>>> Current squid can easily handle 1k req/sec without disks involved. In recent
>>> days one of our reverse proxies was handling 540 req/sec with a single disk,
>>> although its mem hit rate is at 90-95%...
>>> The next bottleneck after disks is ACLs. Only if you use no or very few ACLs
>>> does the I/O model come into play.
>>> And even then it makes a huge difference whether you service a lot of slow
>>> clients or a few fast clients. When you have few open sockets, the poll model
>>> is very much okay.
>>>
>>1k reqs/sec? Are you sure? I wouldn't think so, even entirely from
>>RAM. 500, yes. I've seen that in memory-only benchmarks (95% memhits),
>>but performance starts to degrade when going much above that number (at
>>least on an 800MHz Athlon--faster machines can obviously go a little
>>faster).
>>
>
> Well, y'know what benchmarks are. It all depends heavily on what you service.
> I've never thought my box was anything special. It was running 4 copies
> of Squid at the time, 2 of which were pretty heavily used. And it got
> 540/sec before I had to "tune" it some. Squid 2.3, btw. Here most of the
> objects serviced were 304s or small <1K bits, so only a single pass of
> "pumping" was needed. It lagged. If you need more passes for large objects,
> things degrade very fast. I just think that if my setup did 540/sec then it
> could do better too. Well, I'm not so sure either ;)

Ah, yes. The avg. object size in my tests was 11k (modelled after real
web traffic). That makes a big difference indeed.

 
>>You could gain probably another 50-75% performance using dual
>>processors...
>>
>
> Not sure how. I admit I have a dual P3-800 here, but only 1 CPU was fully
> utilised.

Two Squid processes, load balanced via a simple iptables rule to split
traffic based on the last octet of the destination IP. Works strikingly
well--scaled our most recent shipment from barely getting by at 180
reqs/sec to extremely zippy at 230 reqs/sec and acceptably fast well beyond.
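For the curious, the rules look roughly like this (a sketch from memory rather
than a cut-and-paste from the box; the 3128/3129 ports and the odd/even mask
trick are just the obvious way to set it up, assuming iptables will take a
non-contiguous mask like that):

    # second Squid listens on 3129; destinations with an odd last octet go there
    iptables -t nat -A PREROUTING -p tcp --dport 80 -d 0.0.0.1/0.0.0.1 \
             -j REDIRECT --to-ports 3129
    # everything else (even last octet) falls through to the Squid on 3128
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3128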

>>I've done tests of memory-only Squids, but nothing of alternate
>>networking I/O methods (since Squid only has one, really).
>>
>>I would think TUX is probably a good place to look for network I/O
>>performance implementation ideas. Almost everything used in TUX has
>>
>
> yes, but portability goes out of window I guess...

Yes, unfortunately. It seems none of the OS folks (even the free ones)
want to see eye to eye on how to scale network event notification.
FreeBSD has kqueues, Linux has about three thousand different signal,
event, and poll implementations in user and kernel space all competing
to become official, and I'm not sure what the proprietary OSes have
other than the Solaris /dev/poll interface.
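For reference, the portable baseline they are all trying to beat is basically
this (a simplified sketch only; the handler names are made up, not Squid's
actual comm code):

    /* One pass of a poll()-style event loop: hand the kernel the whole fd
     * array, then scan it for ready descriptors.  kqueue, /dev/poll and the
     * various Linux schemes all exist to avoid this O(n) work on every pass. */
    #include <poll.h>

    extern void read_handler(int fd);    /* hypothetical dispatch hooks */
    extern void write_handler(int fd);

    void comm_poll_once(struct pollfd *fds, int nfds)
    {
        int i;
        if (poll(fds, nfds, 10 /* ms */) <= 0)
            return;                      /* nothing ready, or an error */
        for (i = 0; i < nfds; i++) {
            if (fds[i].revents & (POLLIN | POLLHUP))
                read_handler(fds[i].fd);
            if (fds[i].revents & POLLOUT)
                write_handler(fds[i].fd);
        }
    }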

Hence the reason, I think, that everyone (Henrik, Adrian and Robert most
of all, I gather) wants to see Squid become a broker for different front
and back end modules, i.e.:

Network+protocol stack-->Squid broker-->Disk i/o module

Any network/protocol method or disk I/O method that anyone takes the
time to implement can then plug in at either end in the form of modules
(like the storeio modules today, only more so), while Squid sits in the
middle.
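
Purely as an illustration of the shape of the idea (none of these names
exist in Squid; it's just a sketch of what a pluggable module table might
look like):

    #include <stddef.h>

    /* hypothetical module interface, along the lines of today's storeio
     * modules: a table of entry points the broker calls through */
    typedef struct {
        const char *name;
        int  (*open)(void);                   /* bring the module up */
        void (*read)(int fd, void *buf, size_t len,
                     void (*done)(void *data), void *data); /* async completion */
        void (*close)(int fd);
    } io_module_t;

    /* the broker picks one of each at runtime and shuttles data between them */
    extern io_module_t netio_poll, netio_kqueue;     /* front ends */
    extern io_module_t storeio_ufs, storeio_aufs;    /* back ends  */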

Just my impression of what I've read so far... Seems like a good idea to me.
                                   --
                      Joe Cooper <joe@swelltech.com>
                  Affordable Web Caching Proxy Appliances
                         http://www.swelltech.com