* Sasha Levin <levinsasha928@xxxxxxxxx> wrote:

> On Fri, 2011-05-27 at 12:36 +0200, Ingo Molnar wrote:
> > * Sasha Levin <levinsasha928@xxxxxxxxx> wrote:
> >
> > > I see that in liburcu there is an implementation of an RCU linked
> > > list but no implementation of an rb-tree.
> >
> > Another approach would be, until the RCU interactions are sorted out,
> > to implement a 'big reader lock' thing that is completely lockless on
> > the read side (!).
> >
> > It works well if the write side is expensive but very rare: which is
> > certainly the case for these ioport registration data structures used
> > in the mmio event demux fast path!
> >
> > The write_lock() side signals all worker threads to finish whatever
> > they are doing now and to wait for the write_unlock(). Then the
> > modification can be done and the worker threads can be resumed.
> >
> > This can be updated to RCU later on without much trouble.
> >
> > The advantage is that this could be implemented with the existing
> > thread-pool primitives straight away i think, we'd need five
> > primitives:
> >
> > 	bread_lock();
> > 	bread_unlock();
> > 	bwrite_lock();
> > 	bwrite_unlock();
> >
> > 	brlock_init();
> >
> > and a data type:
> >
> > 	struct brlock;
> >
> > bread_lock()/bread_unlock() is basically just a compiler barrier.
> > bwrite_lock() stops all (other) worker threads.
> > bwrite_unlock() resumes them.
> >
> > That's all - should be 50 lines of code, unless i'm missing something
> > :-)
> >
> > Thanks,
> >
> > 	Ingo
>
> Isn't there something similar to this in the kernel?

Yeah, there's include/linux/lglock.h.

> I prefer not implementing a new lock type at the moment, mostly because
> we're not tackling a bug or an immediate problem: we don't really need
> locking right now (we add all devices at init and don't support hotplug
> yet). So I'd rather not write code just to solve it faster and then
> have it thrown away later.

We don't have to throw it away: RCU is rather complex to pull off here,
and in many cases where writes are very rare, brlocks are the best
solution even with RCU present.

Thanks,

	Ingo
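
For concreteness, here is a minimal sketch of what those five primitives
could look like on top of a pthreads worker pool. Only the names quoted
above (bread_lock()/bread_unlock(), bwrite_lock()/bwrite_unlock(),
brlock_init() and struct brlock) come from the thread; everything else is
an assumption of this sketch: the brlock_checkpoint() helper, the pthread
fields, the idea that each worker parks between work items, and that the
writer is not itself one of the pool workers.

/*
 * Brlock sketch: readers pay only a compiler barrier, the writer stops
 * every worker thread before touching the shared data and resumes them
 * afterwards.  Assumes workers call brlock_checkpoint() between work
 * items; brlock_checkpoint(), barrier() and the struct fields are
 * hypothetical, only the primitive names come from the mail above.
 */
#include <pthread.h>

struct brlock {
	pthread_mutex_t	lock;
	pthread_cond_t	parked_cond;	/* a worker just parked             */
	pthread_cond_t	resume_cond;	/* bwrite_unlock() releases workers */
	int		nr_workers;	/* total worker threads             */
	int		nr_parked;	/* workers currently parked         */
	int		write_pending;	/* a writer wants exclusive access  */
};

/* Stop the compiler from moving loads/stores across the read section. */
#define barrier()	__asm__ __volatile__("" ::: "memory")

static void brlock_init(struct brlock *br, int nr_workers)
{
	pthread_mutex_init(&br->lock, NULL);
	pthread_cond_init(&br->parked_cond, NULL);
	pthread_cond_init(&br->resume_cond, NULL);
	br->nr_workers		= nr_workers;
	br->nr_parked		= 0;
	br->write_pending	= 0;
}

/* Read side: completely lockless, just a compiler barrier. */
static inline void bread_lock(struct brlock *br)	{ (void)br; barrier(); }
static inline void bread_unlock(struct brlock *br)	{ (void)br; barrier(); }

/*
 * Called by each worker between work items, i.e. outside any
 * bread_lock() section.  If a writer is pending, park until resumed.
 */
static void brlock_checkpoint(struct brlock *br)
{
	pthread_mutex_lock(&br->lock);
	while (br->write_pending) {
		br->nr_parked++;
		pthread_cond_signal(&br->parked_cond);
		pthread_cond_wait(&br->resume_cond, &br->lock);
		br->nr_parked--;
	}
	pthread_mutex_unlock(&br->lock);
}

/* Write side: wait until every worker has parked, then modify freely. */
static void bwrite_lock(struct brlock *br)
{
	pthread_mutex_lock(&br->lock);
	br->write_pending = 1;
	while (br->nr_parked < br->nr_workers)
		pthread_cond_wait(&br->parked_cond, &br->lock);
	pthread_mutex_unlock(&br->lock);
}

static void bwrite_unlock(struct brlock *br)
{
	pthread_mutex_lock(&br->lock);
	br->write_pending = 0;
	pthread_cond_broadcast(&br->resume_cond);
	pthread_mutex_unlock(&br->lock);
}

The read side stays free of locks and atomics precisely because the writer
waits for every worker to park at a checkpoint before modifying the shared
ioport tables, which matches the write-rarely profile described above: a
worker would wrap its lookups in bread_lock()/bread_unlock() and call the
(hypothetical) brlock_checkpoint() between work items.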