On Fri, Feb 15, 2019 at 12:32 AM Paolo Bonzini <pbonzini at redhat.com> wrote:
>
> On 02/02/19 02:38, lantianyu1986 at gmail.com wrote:
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index ce770b446238..70cafd3f95ab 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -2918,6 +2918,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> >
> >       if (level > PT_PAGE_TABLE_LEVEL)
> >               spte |= PT_PAGE_SIZE_MASK;
> > +
> > +     sp->last_level = is_last_spte(spte, level);
>
> Wait, I wasn't thinking straight.  If a struct kvm_mmu_page exists, it
> is never the last level.  Page table entries for the last level do not
> have a struct kvm_mmu_page.
>
> Therefore you don't need the flag after all.  I suspect your
> calculations in patch 2 are off by one, and you actually need
>
>         hlist_for_each_entry(sp, range->flush_list, flush_link) {
>                 int pages = KVM_PAGES_PER_HPAGE(sp->role.level + 1);
>                 ...
>         }
>
> For example, if sp->role.level is 1 then the struct kvm_mmu_page is for
> a page containing PTEs and covers an area of 2 MiB.

Yes, you are right. Thanks for pointing that out; I will fix it. The
last_level flag was meant to keep intermediate page table nodes (e.g.,
PGD, PMD) out of the flush list: if both leaf and intermediate nodes
were added, the same address ranges would appear in the list twice.

>
> Thanks,
>
> Paolo

> >       if (tdp_enabled)
> >               spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
> >                                                kvm_is_mmio_pfn(pfn));

--
Best regards
Tianyu Lan
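
For reference, the "+ 1" arithmetic Paolo describes can be checked with a
small userspace sketch. This is illustrative only, not kernel code: the
local KVM_PAGES_PER_HPAGE below is a simplified stand-in for the macro in
arch/x86/include/asm/kvm_host.h, assuming 4 KiB pages and 9 bits of index
per paging level.

        #include <stdio.h>

        #define PAGE_SHIFT              12
        /* Simplified stand-in: the kernel computes this as
         * KVM_HPAGE_SIZE(x) / PAGE_SIZE, which reduces to the same
         * value for 4 KiB pages and 9 bits per level. */
        #define KVM_PAGES_PER_HPAGE(x)  (1UL << (((x) - 1) * 9))

        int main(void)
        {
                int level;

                for (level = 1; level <= 3; level++) {
                        /* A kvm_mmu_page at role.level holds the entries
                         * of that level, so it maps the region one level
                         * up: level 1 (a page of PTEs) covers
                         * 512 * 4 KiB = 2 MiB, hence the "+ 1". */
                        unsigned long pages = KVM_PAGES_PER_HPAGE(level + 1);

                        printf("role.level %d -> %lu pages (%lu KiB)\n",
                               level, pages, pages << (PAGE_SHIFT - 10));
                }
                return 0;
        }

Running it prints 512 pages (2 MiB) for role.level 1, matching Paolo's
example, then 1 GiB for level 2 and 512 GiB for level 3.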