On 02/24/2016 09:17 PM, Paolo Bonzini wrote:
> kvm_mmu_get_page is the only caller of kvm_sync_page_transient
> and kvm_sync_pages.  Moving the handling of the invalid_list there
> removes the need for the underdocumented kvm_sync_page_transient
> function.
>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
> Guangrong, at this point I am confused about why
> kvm_sync_page_transient didn't clear sp->unsync.  Do you remember?
> Or perhaps kvm_mmu_get_page could just call kvm_sync_page now?
It is an optimization to reduce write-protection: turning an unsync page into a sync one requires write-protecting the page and syncing all sptes pointing to the same gfn. However, once the content of the unsync spte has been synced with the guest pte, the spte can be reused as it is, so there is no need to clear sp->unsync.
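To make the trade-off concrete, here is a tiny standalone C model of the idea; the structures and names are simplified stand-ins I made up for illustration, not the real KVM MMU code. Syncing copies the guest PTEs into the shadow page so it can be reused immediately, while leaving sp->unsync set avoids write-protecting the gfn and syncing every spte pointing to it.

/*
 * Minimal standalone model of the idea above; the names and structures
 * are simplified stand-ins, not the real KVM MMU code.
 */
#include <stdbool.h>
#include <stdio.h>

#define PTES_PER_PAGE 4   /* tiny page table for illustration */

struct shadow_page {
    bool unsync;                      /* guest may write its PTEs freely */
    unsigned long spte[PTES_PER_PAGE];
};

/* Pretend guest page table backing this shadow page. */
static unsigned long guest_pte[PTES_PER_PAGE];

/*
 * "Transient" sync: copy the guest PTEs into the shadow page so it can
 * be reused right now, but leave sp->unsync set.  Clearing unsync would
 * also require write-protecting the gfn and syncing every spte pointing
 * to it, which is exactly the cost the optimization avoids.
 */
static void sync_page_transient(struct shadow_page *sp)
{
    for (int i = 0; i < PTES_PER_PAGE; i++)
        sp->spte[i] = guest_pte[i];
    /* sp->unsync deliberately left true */
}

int main(void)
{
    struct shadow_page sp = { .unsync = true };

    guest_pte[0] = 0x1234;            /* guest modified its page table */
    sync_page_transient(&sp);         /* reuse the existing shadow page */

    printf("spte[0]=%#lx unsync=%d\n", sp.spte[0], sp.unsync);
    return 0;
}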
> Also, can you explain the need_sync variable in kvm_mmu_get_page?
This is needed to preserve the semantics of 'unsync spte': only sptes at the last level (level 1) can be unsync, so when a spte at an upper level is created we must eliminate all the unsync sptes pointing to the same gfn.

As you have already merged this patchset to the kvm tree, I will post a patch commenting these cases to make the code more understandable.
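In the meantime, here is a minimal standalone C sketch of that invariant, purely illustrative and with made-up names rather than the kernel's: when a shadow page above the last level is created for a gfn, every unsync page for that gfn is synced first, which is what the need_sync check guards.

/*
 * Minimal standalone model of the invariant above: only last-level
 * (level 1) shadow pages may be unsync, so creating an upper-level
 * shadow page for a gfn forces every unsync page for that gfn to be
 * synced first.  Names are illustrative, not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_SP 8

struct shadow_page {
    unsigned long gfn;
    int level;          /* 1 == last level; only these may be unsync */
    bool unsync;
};

static struct shadow_page pages[MAX_SP];
static int nr_pages;

static void sync_page(struct shadow_page *sp)
{
    /* In the real code this is where the gfn is write-protected too. */
    sp->unsync = false;
}

static struct shadow_page *get_page(unsigned long gfn, int level)
{
    bool need_sync = level > 1;   /* upper-level page: no unsync allowed */

    if (need_sync) {
        for (int i = 0; i < nr_pages; i++)
            if (pages[i].gfn == gfn && pages[i].unsync)
                sync_page(&pages[i]);
    }

    struct shadow_page *sp = &pages[nr_pages++];
    *sp = (struct shadow_page){ .gfn = gfn, .level = level };
    return sp;
}

int main(void)
{
    /* An existing last-level unsync page for gfn 42. */
    pages[nr_pages++] = (struct shadow_page){ .gfn = 42, .level = 1,
                                              .unsync = true };

    get_page(42, 2);    /* upper-level page: the unsync page gets synced */

    printf("unsync after: %d\n", pages[0].unsync);
    return 0;
}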