On Thu, Sep 15, 2022 at 4:47 AM Liam Ni <zhiguangni01@xxxxxxxxx> wrote:
>
> On Tue, 13 Sept 2022 at 20:13, Hou Wenlong <houwenlong.hwl@xxxxxxxxxxxx> wrote:
> >
> > On Thu, Sep 08, 2022 at 01:43:54AM +0800, David Matlack wrote:
> > > On Wed, Aug 24, 2022 at 05:29:18PM +0800, Hou Wenlong wrote:
> > > > The spte pointing to the children SP is dropped, so the
> > > > whole gfn range covered by the children SP should be flushed.
> > > > Although Hyper-V may treat a 1-page flush the same if the
> > > > address points to a huge page, it is still better to use the
> > > > correct huge page size. Also introduce a helper function to do
> > > > range-based flushing when a direct SP is dropped, which would
> > > > help prevent future buggy use of
> > > > kvm_flush_remote_tlbs_with_address() in such cases.
> > > >
> > > > Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
> > > > Suggested-by: David Matlack <dmatlack@xxxxxxxxxx>
> > > > Signed-off-by: Hou Wenlong <houwenlong.hwl@xxxxxxxxxxxx>
> > > > ---
> > > >  arch/x86/kvm/mmu/mmu.c | 10 +++++++++-
> > > >  1 file changed, 9 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > > index e418ef3ecfcb..a3578abd8bbc 100644
> > > > --- a/arch/x86/kvm/mmu/mmu.c
> > > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > > @@ -260,6 +260,14 @@ void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
> > > >  	kvm_flush_remote_tlbs_with_range(kvm, &range);
> > > >  }
> > > >
> > > > +/* Flush all memory mapped by the given direct SP. */
> > > > +static void kvm_flush_remote_tlbs_direct_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
> > > > +{
> > > > +	WARN_ON_ONCE(!sp->role.direct);
> > > > +	kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
> > > > +					   KVM_PAGES_PER_HPAGE(sp->role.level + 1));
>
> Do we need "+1" here? sp->role.level=1 means 4k page.
> I think here should be “KVM_PAGES_PER_HPAGE(sp->role.level)”

Yes, we need the "+ 1" here.

kvm_flush_remote_tlbs_direct_sp() must flush all memory mapped by the
shadow page, which is equivalent to the amount of memory mapped by a
huge page at the next higher level. For example, a shadow page with
role.level == PG_LEVEL_4K maps 2 MiB of the guest physical address
space, since 512 PTEs x 4 KiB per PTE = 2 MiB. (A standalone sketch of
this arithmetic is appended at the bottom of this mail.)

> > > >
> > > nit: I think it would make sense to introduce
> > > kvm_flush_remote_tlbs_gfn() in this patch since you are going to
> > > eventually use it here anyway.
> > >
> > OK, I'll do it in the next version. Thanks!
> >
> > > > +}
> > > > +
> > > >  static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
> > > >  			   unsigned int access)
> > > >  {
> > > > @@ -2341,7 +2349,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> > > >  			return;
> > > >
> > > >  		drop_parent_pte(child, sptep);
> > > > -		kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
> > > > +		kvm_flush_remote_tlbs_direct_sp(vcpu->kvm, child);
> > > >  	}
> > > >  }
> > > >
> > > > --
> > > > 2.31.1
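
For illustration, here is a minimal user-space sketch of the arithmetic
behind the "+ 1". It is not kernel code: PAGE_SHIFT, PTES_PER_SP and
KVM_PAGES_PER_HPAGE() are restated locally (based on my reading of the
definitions in arch/x86/include/asm/kvm_host.h) so that the snippet
builds on its own.

/*
 * Standalone sketch, not kernel code: a shadow page at role.level == L
 * holds 512 SPTEs, each covering KVM_PAGES_PER_HPAGE(L) pages, so the
 * whole SP covers exactly one huge page at level L + 1.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT		12	/* 4 KiB base pages */
#define PTES_PER_SP		512	/* SPTEs per shadow page (local name) */
#define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
#define KVM_PAGES_PER_HPAGE(x)	(1UL << KVM_HPAGE_GFN_SHIFT(x))

int main(void)
{
	for (int level = 1; level <= 3; level++) {
		unsigned long pages = PTES_PER_SP * KVM_PAGES_PER_HPAGE(level);

		/* Equivalent to one huge page at the next higher level. */
		assert(pages == KVM_PAGES_PER_HPAGE(level + 1));
		printf("level %d SP maps %lu 4KiB pages (%lu MiB)\n",
		       level, pages, pages >> (20 - PAGE_SHIFT));
	}
	return 0;
}

Running it prints 2 MiB, 1024 MiB (1 GiB) and 524288 MiB (512 GiB) for
level 1, 2 and 3 shadow pages respectively, which is the range
kvm_flush_remote_tlbs_direct_sp() has to flush.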