Re: Async I/O on IRIX 6.x?

From: Andres Kroonmaa <andre@dont-contact.us>
Date: Tue, 15 Sep 1998 10:42:02 +0300 (EETDST)

On 14 Sep 98, at 15:48, Alex Rousskov <rousskov@nlanr.net> wrote:

> On Tue, 15 Sep 1998, Andres Kroonmaa wrote:
>
> > And how do you signal threads?
>
> Using cond_signal. The current code works as follows. When a request is
> enqueued and there are threads waiting, I send a cond_signal. The condition
> predicate is "queue is not empty". A [single] mutex protects a queue of
> requests. Enqueue/dequeue/wait operations require locking (wait and dequeue
> share the lock, of course). There are only two queues: wait_queue (incoming
> requests) and done_queue (processed requests). The second queue requires no
> cond_waiting and dequeues all requests at once. A thread gets requests from
> wait_queue and puts them into done_queue. Main thread does the opposite.

 That sounds very correct. After that comes real life ;) Most POSIX systems
 do what they promise, but some may do it at a cost.
 I'm not sure which systems, but I've heard a few such cautions (you'd better check
 before you believe me here):
 - although cond_signal() is supposed to unblock only 1 thread, some implementations
   (for reasons I'm not sure of) may unblock _all_ threads waiting on the same
   cond and let them compete for the mutex. As only one thread can proceed with
   the mutex, and all their runtimes are timeshared, it may happen that one thread
   has finished processing and released the mutex before the others have finished
   "competing for it". This is one reason why they warn against rare "spurious
   wakeups", I guess.
 Even if this does not cause spurious wakeups, such implementations can cause
 spikes of thread switches upon each cond_signal (internal to the lib). I guess
 that's also one reason why some coders prefer point-to-point mutexes, or a mutex
 per thread. This might increase predictability... ;)

 I'm not sure why Stew elected to use them that way. Maybe because of this
 consideration, maybe because he was just trying to resolve the race condition
 he has commented on.

> This all may change as the code evolves, of course.
>
> Since the old code did not work for us, we had to change a lot of things here
> and there. The current code works. Now I am trying to identify where we
> improved the old code, and where we did not, and take the best of the two
> versions.

> > > Also, without mutexes, the thread
> > > scheduling itself was "unpredictable" as some (all?) pthread man pages
> > > suggest.
...
> > > That's not what the man pages say though.
> > what pages?
>
> Pthread man pages on IRIX and the DEC pthread manual on the Web at
> http://www.unix.digital.com/faqs/publications/base_doc/DOCUMENTATION/V40D_HTML/V40D_HTML/AQ2DPDTK
> I guess they all comply with the POSIX standard in that part, so I expect
> others to say the same.

 I'm not sure where you read the contrary, but looking at pthread_cond_signal:

 What they mean is that if you have not acquired the mutex of the cond var, then after
 signalling another thread it _may_ start running immediately (caused either by a
 thread switch or by another free CPU). By holding the mutex you can be sure that the
 other thread does not run until you release the mutex associated with the cond.

 That does not contradict what I said: predictability is localised, and the only
 means of achieving it is mutexes. The only thing you can predict is that while you hold
 the mutex, no other thread competing for that same mutex can be running at the same
 time. That's it. You can't predict a thread switch, not even by calling sched_yield
 after releasing the mutex. The thread _may_ switch, but then again, it _might_ not.

 ----------------------------------------------------------------------
  Andres Kroonmaa mail: andre@online.ee
  Network Manager
  Organization: MicroLink Online Tel: 6308 909
  Tallinn, Sakala 19 Pho: +372 6308 909
  Estonia, EE0001 http://www.online.ee Fax: +372 6308 901
 ----------------------------------------------------------------------
Received on Tue Jul 29 2003 - 13:15:54 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:11:55 MST