Re: aio stuff

From: Stewart Forster <slf@dont-contact.us>
Date: Fri, 12 Jun 1998 12:22:32 +1000


Hi Andres,
 
> I was thinking that reducing poll timeouts so the aio stuff does not
> idle too much was not that good an idea, and was wondering why you
> didn't decide to use a pipe to signal back to the main thread?
> Something like this:
>
> - we needed to reduce poll() timeout, to service ready IO events
> - we do not want to send any signal to process itself
> - yet even 10msec timeout might be high if we are paranoid.
> - poll awakes when there is any FD ready.
>
> why not:
> - open a pipe inside squid
> - one FD in poll() waiting for reading,
> with read handler set to aioCheckCallbacks().
> - other FD available for threads, to signal completion back. Writing a
> single byte to that FD should wake poll() and still allow reliable
> delivery of notices. And as pipe buffers may fill up, threads
> should/would block on writes there.
>
> This way, we have 1 FD for all threads, and 1 FD for poll to wake up
> timely. No need to call aioCheckCallbacks() specially before polling,
> and no risk of servicing aio events too late.
>
> Or am i missing something that made you avoid such approach?

        I coded up the pipe version you mentioned just for testing purposes
about 12 months ago. It does work, but with many threads it seemed to create
far greater load (pipe read/write synchronisation overhead in the kernel, I
assume) than just polling for finished threads frequently. A write and a
matching read are required for every operation (instead of just a write),
otherwise the pipe is always ready for reading and the main loop busy-waits.

        Under low loads the single pipe solution worked well, but at high
loads (>50 TCP hits/sec on our Solaris 2.5.1 boxes with 32 threads) there
were significant problems with thread queues blowing out, and the whole lot
ran slower than just polling. I was concerned about the CPU wastage and the
issue you mention above (is 10ms fast enough?), but it seemed the best
compromise. The poll solution is very cheap in CPU, adds no complexity and
is in general adequate. A 10ms timeout still allows for 100 read/write
operations per second; for a single large file transfer from cache with an
8KB transfer block size, that is 50 x 8KB = 400KB/sec even at minimum load.
If you need faster, just reduce the timeout further. Most users have
dedicated cache boxes anyway.

        So, the bottom line is that both solutions work and both have their
weaknesses. Given that the poll approach reduces to no excessive polling
under high loads (ie. no additional work) and high-load performance was
what I cared about, I chose the poll approach even though it is a little
more expensive where we can afford it to be (ie. under low loads).

        Stew.

-- 
Stewart Forster (Snr. Development Engineer)
connect.com.au pty ltd, Level 9, 114 Albert Rd, Sth Melbourne, VIC 3205, Aust.
Email: slf@connect.com.au   Phone: +61 3 9251-3684   Fax: +61 3 9251-3666