On 03/29/2012 11:49 PM, Avi Kivity wrote:
> On 03/29/2012 11:25 AM, Xiao Guangrong wrote:
>> Use the PTE_LIST_WRITE_PROTECT bit in rmap to store the write-protect
>> status and avoid unnecessary shadow page walking
>
> Does kvm_set_pte_rmapp() need adjustment?
>

Yes, in kvm_set_pte_rmapp(), if the page is host write-protected, it will
set this bit:

static void host_page_write_protect(u64 *spte, unsigned long *rmapp)
{
	if (!(*spte & SPTE_HOST_WRITEABLE) &&
	    !(*rmapp & PTE_LIST_WRITE_PROTECT))
		*rmapp |= PTE_LIST_WRITE_PROTECT;
}

It is very useful for the fast page fault path to avoid useless shadow
page table walking if KSM is enabled.

>> Also, if no shadow page is indirect, the page is write-free:
>>
>> +	if (!vcpu->kvm->arch.indirect_shadow_pages)
>> +		return 0;
>> +
>
> Best in its own little patch.
>

Okay, will split it into a little patch.
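
For illustration only, here is a minimal sketch of how a fault path could
consult the cached bit instead of walking the pte_list; the helper name
gfn_may_be_writable() is made up for this example and is not part of the
patch:

static bool gfn_may_be_writable(struct kvm_vcpu *vcpu, unsigned long *rmapp)
{
	/* No indirect shadow pages means nothing write-protects the gfn. */
	if (!vcpu->kvm->arch.indirect_shadow_pages)
		return true;

	/*
	 * The rmap head caches whether any spte mapping this gfn is
	 * write-protected (for example by KSM clearing SPTE_HOST_WRITEABLE),
	 * so the pte_list does not have to be walked here.
	 */
	return !(*rmapp & PTE_LIST_WRITE_PROTECT);
}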