On 02/14/2012 09:43 PM, Marcelo Tosatti wrote:
> Also it should not be necessary for these flushes to be inside mmu_lock
> on EPT/NPT case (since there is no write protection there).

We do write protect with TDP, if nested virt is active.  The question is
whether we have indirect pages or not, not whether TDP is active or not
(even without TDP, if you don't enable paging in the guest, you don't
have to write protect).

> But it would
> be awkward to differentiate the unlock position based on EPT/NPT.
> I would really like to move the IPI back out of the lock.

How about something like a sequence lock:

  spin_lock(mmu_lock);
  need_flush = write_protect_stuff();
  atomic_add(kvm->want_flush_counter, need_flush);
  spin_unlock(mmu_lock);

  while ((done = atomic_read(kvm->done_flush_counter))
         < (want = atomic_read(kvm->want_flush_counter))) {
      kvm_make_request(flush);
      atomic_cmpxchg(kvm->done_flush_counter, done, want);
  }

This (or maybe a corrected and optimized version) ensures that any
need_flush cannot pass the while () barrier, no matter which thread
encounters it first.  However it violates the "do not invent new
locking techniques" commandment.  Can we map it to some existing
method?

--
error compiling committee.c: too many arguments to function
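
A minimal standalone sketch of the want/done counter scheme above, using
C11 atomics in place of the kernel's atomic_t; the kvm_flush_state
struct, the field names, and the request_tlb_flush() stub are made up
for illustration and are not existing KVM code:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct kvm_flush_state {
          atomic_int want_flush_counter;  /* flushes requested so far */
          atomic_int done_flush_counter;  /* flushes known to have completed */
  };

  /* Stand-in for kvm_make_request(flush) plus the IPI. */
  static void request_tlb_flush(int upto)
  {
          printf("flushing TLBs up to generation %d\n", upto);
  }

  /* Runs after mmu_lock has been dropped. */
  static void flush_if_needed(struct kvm_flush_state *ks)
  {
          int done, want;

          while ((done = atomic_load(&ks->done_flush_counter)) <
                 (want = atomic_load(&ks->want_flush_counter))) {
                  request_tlb_flush(want);
                  /*
                   * Record that everything up to 'want' has been flushed.
                   * If another thread advanced done_flush_counter in the
                   * meantime, the cmpxchg fails and the loop re-reads the
                   * counters, so no need_flush can slip past the barrier.
                   */
                  atomic_compare_exchange_strong(&ks->done_flush_counter,
                                                 &done, want);
          }
  }

  int main(void)
  {
          struct kvm_flush_state ks;

          atomic_init(&ks.want_flush_counter, 0);
          atomic_init(&ks.done_flush_counter, 0);

          /* Under mmu_lock: write protection decided a flush is needed. */
          bool need_flush = true;
          atomic_fetch_add(&ks.want_flush_counter, (int)need_flush);
          /* mmu_lock is dropped here. */

          flush_if_needed(&ks);
          return 0;
  }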