On Fri, 2008-05-30 at 11:56 -0700, Christoph Lameter wrote:
> On Fri, 30 May 2008, Peter Zijlstra wrote:
>
> > Yes, I get that, but for instance kmem_cache_cpu usage would require
> > horribly long preempt-off sections, hence we add a lock and manage
> > consistency using that lock instead of strict per-cpu and preempt
> > disable.
>
> Really? Where is that horribly long preempt-off section? I thought they
> were all short and we use semaphores when looping over all slabs.

Remember, horribly long on -rt is on the order of ~30us.

See for instance the path:

  flush_slab()
    slab_lock()                       <-- bit_spinlock
    deactivate_slab()
      loop over objects
      unfreeze_slab()
        add_partial()
          spin_lock(&n->list_lock)    <-- another lock

All of that runs with preempt disabled, and worse, it can spin for some
indefinite amount of time. That is totally unacceptable on -rt, so we need
to make it all preemptible and use sleeping locks (IRQ inversion can't
happen, for the only thing that runs in hardirq is try_to_wake_up()).

--
To unsubscribe from this list: send the line "unsubscribe linux-arch" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html