Sean Christopherson <sean.j.christopherson@xxxxxxxxx> writes:

> Explicitly track the EPTP that is common to all vCPUs instead of
> grabbing vCPU0's EPTP when invoking Hyper-V's paravirt TLB flush.
> Tracking the EPTP will allow optimizing the checks when loading a new
> EPTP and will also allow dropping ept_pointer_match, e.g. by marking
> the common EPTP as invalid.
>
> This also technically fixes a bug where KVM could theoretically flush an
> invalid GPA if all vCPUs have an invalid root.  In practice, it's likely
> impossible to trigger a remote TLB flush in such a scenario.  In any
> case, the superfluous flush is completely benign.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/vmx.c | 19 ++++++++-----------
>  arch/x86/kvm/vmx/vmx.h |  1 +
>  2 files changed, 9 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index bcc097bb8321..6d53bcc4a1a9 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -486,6 +486,7 @@ static void check_ept_pointer_match(struct kvm *kvm)
>  		}
>  	}
>
> +	to_kvm_vmx(kvm)->hv_tlb_eptp = tmp_eptp;

I was going to suggest you reset hv_tlb_eptp to INVALID_PAGE in case
this check fails (a couple of lines above), but this function is gone
later in the series and the replacement code in
hv_remote_flush_tlb_with_range() does exactly that.

>  	to_kvm_vmx(kvm)->ept_pointers_match = EPT_POINTERS_MATCH;
>  }
>
> @@ -498,21 +499,18 @@ static int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush
>  			range->pages);
>  }
>
> -static inline int __hv_remote_flush_tlb_with_range(struct kvm *kvm,
> -		struct kvm_vcpu *vcpu, struct kvm_tlb_range *range)
> +static inline int hv_remote_flush_eptp(u64 eptp, struct kvm_tlb_range *range)
>  {
> -	u64 ept_pointer = to_vmx(vcpu)->ept_pointer;
> -
>  	/*
>  	 * FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE hypercall needs address
>  	 * of the base of EPT PML4 table, strip off EPT configuration
>  	 * information.
>  	 */
>  	if (range)
> -		return hyperv_flush_guest_mapping_range(ept_pointer & PAGE_MASK,
> +		return hyperv_flush_guest_mapping_range(eptp & PAGE_MASK,
>  				kvm_fill_hv_flush_list_func, (void *)range);
>  	else
> -		return hyperv_flush_guest_mapping(ept_pointer & PAGE_MASK);
> +		return hyperv_flush_guest_mapping(eptp & PAGE_MASK);
>  }
>
>  static int hv_remote_flush_tlb_with_range(struct kvm *kvm,
> @@ -530,12 +528,11 @@ static int hv_remote_flush_tlb_with_range(struct kvm *kvm,
>  		kvm_for_each_vcpu(i, vcpu, kvm) {
>  			/* If ept_pointer is invalid pointer, bypass flush request. */
>  			if (VALID_PAGE(to_vmx(vcpu)->ept_pointer))
> -				ret |= __hv_remote_flush_tlb_with_range(
> -					kvm, vcpu, range);
> +				ret |= hv_remote_flush_eptp(to_vmx(vcpu)->ept_pointer,
> +							    range);
>  		}
> -	} else {
> -		ret = __hv_remote_flush_tlb_with_range(kvm,
> -				kvm_get_vcpu(kvm, 0), range);
> +	} else if (VALID_PAGE(to_kvm_vmx(kvm)->hv_tlb_eptp)) {
> +		ret = hv_remote_flush_eptp(to_kvm_vmx(kvm)->hv_tlb_eptp, range);

I assume Hyper-V will swallow INVALID_PAGE without complaining much, but
it indeed seems pointless to do anything in this case.
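FWIW, the shape I'd expect the logic to take once check_ept_pointer_match()
goes away later in the series is something like the below. To be clear,
this is just an untested sketch from my side (reusing hv_remote_flush_eptp()
from this patch), not a claim about what the actual follow-up code does:

/*
 * Untested sketch only; assumes the EPTP-load path invalidates
 * hv_tlb_eptp whenever a vCPU switches to a different root.
 */
static int hv_remote_flush_tlb_with_range(struct kvm *kvm,
		struct kvm_tlb_range *range)
{
	struct kvm_vmx *kvm_vmx = to_kvm_vmx(kvm);
	u64 tmp_eptp, common_eptp = INVALID_PAGE;
	bool mismatch = false;
	struct kvm_vcpu *vcpu;
	int ret = 0, i;

	spin_lock(&kvm_vmx->ept_pointer_lock);

	if (VALID_PAGE(kvm_vmx->hv_tlb_eptp)) {
		/* All vCPUs share a valid EPTP, a single flush suffices. */
		ret = hv_remote_flush_eptp(kvm_vmx->hv_tlb_eptp, range);
	} else {
		/* Flush every valid root and recompute the common EPTP. */
		kvm_for_each_vcpu(i, vcpu, kvm) {
			tmp_eptp = to_vmx(vcpu)->ept_pointer;
			/* vCPUs with an invalid root have nothing to flush. */
			if (!VALID_PAGE(tmp_eptp))
				continue;

			if (!VALID_PAGE(common_eptp))
				common_eptp = tmp_eptp;
			else if (common_eptp != tmp_eptp)
				mismatch = true;

			ret |= hv_remote_flush_eptp(tmp_eptp, range);
		}

		/* Keep hv_tlb_eptp invalid unless all valid roots agree. */
		kvm_vmx->hv_tlb_eptp = mismatch ? INVALID_PAGE : common_eptp;
	}

	spin_unlock(&kvm_vmx->ept_pointer_lock);
	return ret;
}

i.e. hv_tlb_eptp only stays valid when every vCPU with a valid root uses
the same EPTP, the hypercall is skipped entirely when no vCPU has a valid
root, and the single-flush fast path is taken otherwise.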
>  	}
>
>  	spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 5961cb897125..3d557a065c01 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -301,6 +301,7 @@ struct kvm_vmx {
>  	bool ept_identity_pagetable_done;
>  	gpa_t ept_identity_map_addr;
>
> +	hpa_t hv_tlb_eptp;
>  	enum ept_pointers_status ept_pointers_match;
>  	spinlock_t ept_pointer_lock;
>  };

Reviewed-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

-- 
Vitaly