On Sat, 2013-09-14 at 14:24 +0200, Manfred Spraul wrote:
> Hi Mike,

Hi,

> > Index: linux-2.6/ipc/sem.c
> > ===================================================================
> > --- linux-2.6.orig/ipc/sem.c
> > +++ linux-2.6/ipc/sem.c
> > @@ -247,11 +256,22 @@ static inline int sem_lock(struct sem_ar
> >  	 */
> >  lock_array:
> >  	spin_lock(&sma->sem_perm.lock);
> > +wait_array:
> >  	for (i = 0; i < sma->sem_nsems; i++) {
> > -		struct sem *sem = sma->sem_base + i;
> > +		sem = sma->sem_base + i;
> > +#ifdef CONFIG_PREEMPT_RT_BASE
> > +		if (spin_is_locked(&sem->lock))
> > +#endif
> >  		spin_unlock_wait(&sem->lock);
> >  	}
> >
> I don't like this part of the change:

None of it is pretty, but the livelock is even less pretty ;-)

> It reads like a micro-optimization for spin_unlock_wait() within the
> ipc/sem.c code.

It's exactly that, hope to hammer fewer locks.

> If spin_unlock_wait() for CONFIG_PREEMPT_RT_BASE is broken, then the
> implementation of spin_unlock_wait() should be fixed.

But it's not broken, taking the lock lets PI see/fix inversion.
Preemptible locks are (necessary) evil incarnate.

	-Mike

--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html