On Saturday 23 January 2010 10:54:22 am Attilio Rao wrote:
> Author: attilio
> Date: Sat Jan 23 15:54:21 2010
> New Revision: 202889
> URL: http://svn.freebsd.org/changeset/base/202889
> - Fix a race in sched_switch() of sched_4bsd.
> In the case of the thread being on a sleepqueue or a turnstile, the
> sched_lock was acquired (without the aid of the td_lock interface) and
> the td_lock was dropped. This was going to break locking rules for other
> threads wanting to access the thread (via the td_lock interface) and
> modify its flags (allowed as long as the container lock was different
> from the one used in sched_switch).
> In order to prevent this situation, the td_lock is now blocked while
> sched_lock is acquired there.
> - Merge ULE's internal function thread_block_switch() into the global
> thread_lock_block() and make the former's semantics the default for
> thread_lock_block(). This means that thread_lock_block() will not
> disable interrupts when called (and consequently thread_unlock_block()
> will not re-enable them when called). This should be done manually
> when necessary.
> Note, however, that ULE's thread_unblock_switch() is not removed
> because it reflects a semantic difference specific to ULE (the
> td_lock may not necessarily still be blocked_lock when it is called).
> While asymmetric, it does describe a notable difference in semantics
> that is good to keep in mind.
Does this affect the various #ifdef's for handling the third argument to
cpu_switch()? E.g. does 4BSD need to spin if td_lock is &blocked_lock?
Also, BLOCK_SPIN() on x86 is non-optimal. It should not do cmpxchg in a loop.
Instead, it should do cmp in a loop, and if the cmp succeeds, then try