Re: [squid-users] Squid Performance Issues - reproduced

From: Andres Kroonmaa <andre@dont-contact.us>
Date: Sat, 04 Jan 2003 17:03:46 +0200

On 4 Jan 2003 at 0:27, Henrik Nordstrom wrote:

> > > The likelihood for the I/O thread to get scheduled "immediately" is
> > > rather high on a UP machine. Same thing on an SMP machine, but on an SMP
> >
> > hmm. likely - why?
>
> Why I do not know, but that is how it behaves on UP systems, and I
> assume that on SMP systems the second CPU is put to use, and the
> scheduler has no reason to delay activation of threads which have been
> running on the second CPU (or not at all) if the second CPU is available.

 Well, I guess this is how it behaves on one UP system, one OS, under given
 load conditions. Yes, but don't generalise from this. SMP systems will
 indeed put another CPU to use, but again, not necessarily immediately. The
 delay might be too small to detect, but the point is that threads runnable
 on different CPUs are so independent that they interact only indirectly via
 the runqueues. Under high loads, that delay might be large enough that the
 I/O thread isn't even run before the next return from poll(). It is likely
 to run, but you should never rely on that. Best is if you do not need to care.

> Exactly, and my testing indicates you are right. A mutex unlock does not
> force a thread switch, not even an unlock+lock sequence while there are
> other threads waiting for the lock.

 Funny you say so. I just looked into the linuxthreads sources and found
 that mutex_unlock indeed attempts to restart a suspended thread if any
 threads are blocked on the given mutex. To do so, the suspended thread is
 sent a signal with kill(). So whether a thread switch happens depends on
 how the kernel handles kill(): if it delivers the signal immediately, then
 the thread switches too; if the kernel just enqueues the signal, unlock
 returns fast.
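
 A quick standalone test (my own sketch, not Squid code) can show which
 of the two your system does: if the waiter's line prints between the two
 main-thread lines, the kernel delivered the restart signal immediately.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *waiter(void *arg)
    {
        pthread_mutex_lock(&m);          /* blocks until main unlocks */
        printf("waiter: got mutex\n");
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_mutex_lock(&m);
        pthread_create(&t, NULL, waiter, NULL);
        sleep(1);                        /* let the waiter block on the mutex */

        pthread_mutex_unlock(&m);        /* linuxthreads sends kill() here */
        printf("main: after unlock\n");  /* prints first if no switch happened */

        pthread_join(t, NULL);
        return 0;
    }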

> > If it's allowed to happen, then the previous
> > signal is lost, but the man pages do not document signal loss, therefore
> > such a case must be detected and resolved. One way is to forcibly make
> > a thread switch on every single m_unlock, which is a CPU-buster, or somehow
> > mark the mutex into an intermediate state, denying new locks until the
> > scheduler has been run.
>
> It is resolved, at least in glibc. The signal is not just a boolean. For
> each thread it is a boolean, but each signal signals exactly one thread,
> which is then put on a queue to be run by the scheduler. The next signal
> goes to the next thread in the cond queue.

 Yes, in Linux that's true. Threads sit in a linked list and cond_signal
 pops the top one off the list. In addition, cond_signal also delivers a
 literal kill() to the popped thread, so a thread switch may happen at
 cond_signal time too, before the mutex is unlocked. If I recall correctly,
 in Linux threads are full-blown processes, and their switch is not much
 different from process switching. Will Linux switch tasks if one process
 sends a kill() signal to another?
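
 That kill()-at-signal-time is exactly why the ordering of cond_signal and
 mutex_unlock matters. A toy sketch of the two orderings (again mine, not
 from Squid):

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    void *waiter(void *arg)
    {
        pthread_mutex_lock(&m);
        while (!ready)
            pthread_cond_wait(&c, &m);  /* releases m, suspends until kill() */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    void signal_inside(void)            /* may ping-pong: woken, blocks, woken */
    {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&c);        /* kill() sent while mutex still held */
        pthread_mutex_unlock(&m);       /* possibly a second kill() here */
    }

    void signal_outside(void)           /* woken thread can acquire at once */
    {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_mutex_unlock(&m);
        pthread_cond_signal(&c);        /* mutex already free when it wakes */
    }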

> I assume there is some optimization involved detecting mutex locks etc.,
> avoiding rescheduling of the signalled threads while the mutex is
> locked, possibly by not finishing the last steps of the delivery of
> cond_signal until mutex_unlock, but cond_signal is not mutex-dependent
> so I am a bit confused on this point..

 No, no optimisations are done here. kill() signals are sent left and
 right. If the thread restarts before the mutex is released, it spins on
 it, and if it still can't acquire it, it suspends again. Another kill()
 will be delivered at unlock time. So in the worst case there are a few
 ping-pong switches between threads in very fast succession, and a lot of
 signals flowing.. I didn't know that.
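
 That cost is easy enough to make visible. A toy benchmark (my own sketch)
 that times lock/unlock pairs with and without contention shows the signal
 traffic in the contended numbers:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/time.h>

    #define N 100000

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *contender(void *arg)
    {
        int i;
        for (i = 0; i < N; i++) {
            pthread_mutex_lock(&m);
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        pthread_t a, b;
        double t0;

        t0 = now();
        contender(NULL);                /* one thread: no contention, no kill()s */
        printf("uncontended: %.3fs for %d pairs\n", now() - t0, N);

        t0 = now();
        pthread_create(&a, NULL, contender, NULL);
        pthread_create(&b, NULL, contender, NULL);
        pthread_join(a, NULL);          /* two threads: 2*N pairs, plus ping-pong */
        pthread_join(b, NULL);
        printf("contended:   %.3fs for %d pairs\n", now() - t0, 2 * N);
        return 0;
    }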

> > does it work? ;) I'm only worried about pipe buffering. does the
>
> Yes, it works.
> Pipes are immediate. If you want to buffer you must do so in userspace.

 That's nice. I don't know why, but I had a faint memory that on Solaris
 this had some quirk that made me drop the idea some time ago...
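
 For the record, the idea is the usual pipe-wakeup trick. As I understand
 it (a sketch, not the actual Squid patch), the I/O thread writes a byte
 when a request completes, and the main loop keeps the read end in its
 poll() set, so a completion ends the poll() sleep immediately:

    #include <poll.h>
    #include <unistd.h>

    static int wakeup_pipe[2];          /* [0] read end, [1] write end */

    void io_thread_done(void)           /* called from the I/O thread */
    {
        char c = 0;
        write(wakeup_pipe[1], &c, 1);   /* unbuffered: poll() wakes at once */
    }

    void main_loop(void)
    {
        struct pollfd pfd;

        pipe(wakeup_pipe);
        pfd.fd = wakeup_pipe[0];
        pfd.events = POLLIN;

        for (;;) {
            poll(&pfd, 1, 1000);        /* returns early on any completion */
            if (pfd.revents & POLLIN) {
                char buf[64];
                read(wakeup_pipe[0], buf, sizeof(buf));  /* drain the bytes */
                /* ... reap completed I/O requests here ... */
            }
            /* ... the normal network FD poll work ... */
        }
    }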