Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault


 



On 05/03/2012 05:07 AM, Marcelo Tosatti wrote:


>> 'entry' is not a problem since it comes from an atomic read-write as
>> mentioned above. I need to change this code to:
>>
>> 		/*
>> 		 * Optimization: for pte sync, if spte was writable the hash
>> 		 * lookup is unnecessary (and expensive). Write protection
>> 		 * is responsibility of mmu_get_page / kvm_sync_page.
>> 		 * Same reasoning can be applied to dirty page accounting.
>> 		 */
>> 		if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
>> 			goto set_pte;
>>    ......
>>
>>
>>          if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
>>                  kvm_flush_remote_tlbs(vcpu->kvm);
> 
> What is of more importance than the ability to verify that this or that
> particular case is ok at the moment is to write code in such a way that
> it is easy to verify that it is correct.
> 
> Thus the suggestion above:
> 
> "scattered all over (as mentioned before, i think a pattern of read spte
> once, work on top of that, atomically write and then deal with results
> _everywhere_ (where mmu lock is held) is more consistent."
> 


Marcelo, thanks for taking the time to patiently review and reply to my mail.

I am confused by '_everywhere_'. Does it mean every path that reads/updates
the spte? Why not verify only the paths that depend on is_writable_pte()?

Is it for the reason that "it's easy to verify that it is correct"? But these
paths are safe, since they do not care about PT_WRITABLE_MASK at all. What
these paths do care about is that the Dirty bit and Accessed bit are not
lost; that is why we always treat the spte as "volatile" if it can be
updated outside of mmu-lock.
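To make the "A/D bits must not be lost" point concrete, here is a minimal
user-space sketch (fold_ad_bits is a hypothetical helper name; bit positions
follow x86 PTEs, where Accessed is bit 5 and Dirty is bit 6): whatever value
the atomic exchange returns, any A/D bits hardware set locklessly in it must
be folded back into the new spte.

```c
#include <stdint.h>

/* Hypothetical user-space model; bit positions follow x86 PTEs. */
#define PT_ACCESSED_MASK (1ULL << 5)
#define PT_DIRTY_MASK    (1ULL << 6)

/*
 * After atomically replacing an spte, fold back any Accessed/Dirty
 * bits that hardware may have set locklessly between our earlier
 * read and the atomic exchange, so those bits are never lost.
 */
static uint64_t fold_ad_bits(uint64_t last_spte, uint64_t new_spte)
{
	return new_spte | (last_spte & (PT_ACCESSED_MASK | PT_DIRTY_MASK));
}
```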

For future development? We can add an extra comment to is_writable_pte()
to warn developers to use it more carefully.

It is also very hard to verify the spte everywhere. :(

Actually, the only current code that cares about PT_WRITABLE_MASK is the
TLB-flush logic; maybe we can fold it into mmu_spte_update.
[
  There are three ways to modify an spte: present -> nonpresent, nonpresent -> present,
  and present -> present.

  But we only need to care about present -> present for the lockless case.
]

/*
 * Returning true means we need to flush TLBs because the spte changed
 * from writable to read-only.
 */
bool mmu_update_spte(u64 *sptep, u64 spte)
{
	u64 last_spte, old_spte = *sptep;
	bool flush = false;

	last_spte = xchg(sptep, spte);

	if ((is_writable_pte(last_spte) ||
	     spte_has_updated_lockless(old_spte, last_spte)) &&
	    !is_writable_pte(spte))
		flush = true;

	.... track Dirty/Accessed bit ...

	return flush;
}
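The flush decision above can be modeled in plain user-space C, which makes it
easy to test. This is a sketch, not kernel code: PT_WRITABLE_MASK is bit 1 as
on x86, __atomic_exchange_n stands in for the kernel's xchg(), and a simple
inequality check stands in for spte_has_updated_lockless():

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical user-space model; PT_WRITABLE_MASK is bit 1 as on x86. */
#define PT_WRITABLE_MASK (1ULL << 1)

static bool is_writable_pte(uint64_t pte)
{
	return (pte & PT_WRITABLE_MASK) != 0;
}

/*
 * Returns true when the caller must flush TLBs: the spte went from
 * writable to read-only, judged either by our earlier read (old_spte)
 * or by the value a lockless updater raced in (last_spte).
 */
static bool mmu_update_spte_model(uint64_t *sptep, uint64_t new_spte)
{
	uint64_t old_spte = *sptep;
	/* Atomically install new_spte; last_spte is the value replaced. */
	uint64_t last_spte = __atomic_exchange_n(sptep, new_spte,
						 __ATOMIC_SEQ_CST);
	/* Stand-in for spte_has_updated_lockless(old_spte, last_spte). */
	bool changed_lockless = (last_spte != old_spte);

	return (is_writable_pte(last_spte) || changed_lockless) &&
	       !is_writable_pte(new_spte);
}
```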

Furthermore, the "if (spte has changed) goto beginning" style is feasible
in set_spte, since this is a fast path. (I can speed up mmu_need_write_protect.)
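As a rough illustration of that retry style (again a user-space sketch with
hypothetical names, using the GCC/Clang __atomic builtins in place of the
kernel's cmpxchg): take a snapshot of the spte, compute the new value from
that snapshot, and atomically install it; if a lockless updater changed the
spte in the meantime, go back to the beginning.

```c
#include <stdbool.h>
#include <stdint.h>

#define PT_WRITABLE_MASK (1ULL << 1)

/*
 * Sketch of the "re-read and retry" pattern: read the spte once,
 * derive the new value from that single snapshot, and cmpxchg it in.
 * On failure (a concurrent lockless update), start over.
 */
static uint64_t spte_set_bits_retry(uint64_t *sptep, uint64_t bits)
{
	uint64_t old;

	do {
		old = *(volatile uint64_t *)sptep;  /* read once */
	} while (!__atomic_compare_exchange_n(sptep, &old, old | bits,
					      false, __ATOMIC_SEQ_CST,
					      __ATOMIC_SEQ_CST));

	return old | bits;  /* the value we installed */
}
```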



