On 02/10/2012 03:21 PM, Takuya Yoshikawa wrote:
> (2012/02/10 15:55), Xiao Guangrong wrote:
>> On 02/10/2012 02:29 PM, Takuya Yoshikawa wrote:
>>
>>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>>> index 1561028..69d06f5 100644
>>> --- a/arch/x86/kvm/paging_tmpl.h
>>> +++ b/arch/x86/kvm/paging_tmpl.h
>>> @@ -682,6 +682,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  	mmu_topup_memory_caches(vcpu);
>>>  
>>>  	spin_lock(&vcpu->kvm->mmu_lock);
>>> +
>>>  	for_each_shadow_entry(vcpu, gva, iterator) {
>>>  		level = iterator.level;
>>>  		sptep = iterator.sptep;
>>> @@ -697,8 +698,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>>>  			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>>>  
>>> -			if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
>>> -				kvm_flush_remote_tlbs(vcpu->kvm);
>>> +			mmu_page_zap_pte(vcpu->kvm, sp, sptep);
>>>  
>>>  			if (!rmap_can_add(vcpu))
>>>  				break;
>>> @@ -713,6 +713,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
>>>  			break;
>>>  	}
>>> +
>>> +	kvm_flush_remote_tlbs(vcpu->kvm);
>>>  	spin_unlock(&vcpu->kvm->mmu_lock);
>>
>>
>> It is obviously wrong; I do not think all TLBs always need to be flushed...
>>
>
> What do you mean by "obviously wrong"?

In the current code, all TLBs are flushed only when a spte is zapped, but after your change, they are always flushed.

> Even before this patch, we were always flushing TLBs from the caller.
>

Oh, could you please tell me where TLBs can be flushed in this path, except when a spte is zapped?
> I have a question: your patches apparently changed the timing of TLB flush
> but all I could see from the changelogs were:
>
> KVM: MMU: cleanup FNAME(invlpg)
>
> Directly Use mmu_page_zap_pte to zap spte in FNAME(invlpg), also remove the
> same code between FNAME(invlpg) and FNAME(sync_page)

This patch does not change the logic, and the timing of the TLB flush is also unchanged; it just calls kvm_flush_remote_tlbs directly when a spte is zapped.

> KVM: MMU: fast prefetch spte on invlpg path
>
> Fast prefetch spte for the unsync shadow page on invlpg path

This patch did not change where kvm_flush_remote_tlbs is called. What caused your confusion?