Re: [PATCH 1/2] KVM: mmu_notifier: Flush TLBs before releasing mmu_lock

On 02/15/2012 05:47 PM, Avi Kivity wrote:

> On 02/15/2012 11:18 AM, Avi Kivity wrote:
>> On 02/14/2012 09:43 PM, Marcelo Tosatti wrote:
>>> Also it should not be necessary for these flushes to be inside mmu_lock
>>> on EPT/NPT case (since there is no write protection there). 
>>
>> We do write protect with TDP, if nested virt is active.  The question is
>> whether we have indirect pages or not, not whether TDP is active or not
>> (even without TDP, if you don't enable paging in the guest, you don't
>> have to write protect).
>>
>>> But it would
>>> be awkward to differentiate the unlock position based on EPT/NPT.
>>>
>>
>> I would really like to move the IPI back out of the lock.
>>
>> How about something like a sequence lock:
>>
>>
>>     spin_lock(mmu_lock)
>>     need_flush = write_protect_stuff();
>>     atomic_add(kvm->want_flush_counter, need_flush);
>>     spin_unlock(mmu_lock);
>>
>>     while ((done = atomic_read(kvm->done_flush_counter)) < (want =
>> atomic_read(kvm->want_flush_counter))) {
>>           kvm_make_request(flush)
>>           atomic_cmpxchg(kvm->done_flush_counter, done, want)
>>     }
>>
>> This (or maybe a corrected and optimized version) ensures that any
>> need_flush cannot pass the while () barrier, no matter which thread
>> encounters it first.  However it violates the "do not invent new locking
>> techniques" commandment.  Can we map it to some existing method?
> 
> There is no need to advance 'want' in the loop.  So we could do
> 
> /* must call with mmu_lock held */
> void kvm_mmu_defer_remote_flush(kvm, need_flush)
> {
>       if (need_flush)
>             ++kvm->flush_counter.want;
> }
> 
> /* may call without mmu_lock */
> void kvm_mmu_commit_remote_flush(kvm)
> {
>       want = ACCESS_ONCE(kvm->flush_counter.want)
>       while ((done = atomic_read(kvm->flush_counter.done)) < want) {
>             kvm_make_request(flush)
>             atomic_cmpxchg(kvm->flush_counter.done, done, want)
>       }
> }
> 


Hmm, we already have kvm->tlbs_dirty, so we can do it like this:

#define SPTE_INVALID_UNCLEAN (1ULL << 63)

In the invalidate-page path:
lock mmu_lock
if (spte is invalid)
	kvm->tlbs_dirty |= SPTE_INVALID_UNCLEAN;
need_tlb_flush = kvm->tlbs_dirty;
unlock mmu_lock
if (need_tlb_flush)
	kvm_flush_remote_tlbs()

And in the write-protect path:
lock mmu_lock
	if (a spte was changed to read-only ||
	      kvm->tlbs_dirty & SPTE_INVALID_UNCLEAN)
		kvm_flush_remote_tlbs()
unlock mmu_lock

How about this?


