On Sat, Mar 26, 2022, Mingwei Zhang wrote:
> On Fri, Mar 25, 2022, Sean Christopherson wrote:
> > On Sun, Mar 13, 2022, Mingwei Zhang wrote:
> > > On Thu, Mar 3, 2022 at 11:39 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> > > > @@ -898,13 +879,13 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> > > >   * SPTEs have been cleared and a TLB flush is needed before releasing the
> > > >   * MMU lock.
> > > >   */
> > > > -bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id, gfn_t start,
> > > > -				 gfn_t end, bool can_yield, bool flush)
> > > > +bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> > > > +			   bool can_yield, bool flush)
> > > >  {
> > > >  	struct kvm_mmu_page *root;
> > > >
> > > >  	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
> > > > -		flush = zap_gfn_range(kvm, root, start, end, can_yield, flush);
> > > > +		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, false);
> > >
> > > Hmm, I think we have to be very careful here. If we zap only leafs,
> > > there can be side effects. For instance, the code in
> > > disallowed_hugepage_adjust() may not work as intended. Check the
> > > following condition in arch/x86/kvm/mmu/mmu.c:2918:
> > >
> > > 	if (cur_level > PG_LEVEL_4K &&
> > > 	    cur_level == fault->goal_level &&
> > > 	    is_shadow_present_pte(spte) &&
> > > 	    !is_large_pte(spte)) {
> > >
> > > If we previously used 4K mappings in this range for various reasons
> > > (dirty logging, etc.) and then zap the range, the next time the guest
> > > touches a 4K page we should map the range at the maximum level
> > > allowed for the guest.
> > >
> > > However, if we zap only the leafs, then when the code reaches the
> > > above location, is_shadow_present_pte(spte) returns true, since the
> > > SPTE is a non-leaf entry (say, a regular PMD entry). The whole if
> > > statement evaluates true, and we never allow remapping guest memory
> > > with huge pages.
> >
> > But that's at worst a performance issue, and arguably working as intended. The
> > zap in this case is never due to the _guest_ unmapping the pfn, so odds are good
> > the guest will want to map back in the same pfns with the same permissions.
> > Zapping shadow pages so that the guest can maybe create a hugepage may end up
> > being a lot of extra work for no benefit. Or it may be a net positive. Either
> > way, it's not a functional issue.
>
> This is a performance bug rather than a functional one, but it does
> affect both dirty logging (before Ben's early page promotion) and our
> demand paging.

I'd buy the argument that KVM should zap shadow pages when zapping specifically
to recreate huge pages, but that's a different path entirely. Disabling of
dirty logging uses a dedicated path, zap_collapsible_spte_range().

> So I proposed the fix here:
>
> https://lore.kernel.org/lkml/20220323184915.1335049-2-mizhang@xxxxxxxxxx/T/#me78d50ffac33f4f418432f7b171c50630414ef28
>
> If we see memory corruptions, I bet it can only be that we missed some
> TLB flushes, since this patch series is basically trying to avoid
> immediate TLB flushing by simply changing the ASID (assigning a new
> root).

Ya, it was a lost TLB flush goof. My apologies for not cc'ing you on the patch.

https://lore.kernel.org/all/20220325230348.2587437-1-seanjc@xxxxxxxxxx

> To debug, maybe force the TLB flushes after zap_gfn_range and see if
> the problem still exists?
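
That's effectively what the fix does, just limited to the one zap that lost
its flush. For bisecting this class of bug in general, a bigger hammer along
these lines (completely untested, debug only, diff context approximate) would
flush unconditionally after the TDP MMU zap; if the corruption disappears with
this applied, a lost flush is almost certainly to blame:

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
 	struct kvm_mmu_page *root;

 	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
 		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, false);

+	/*
+	 * Debug only: flush all TLBs unconditionally so that a flush lost
+	 * or elided by a caller can't leave stale entries behind.
+	 */
+	kvm_flush_remote_tlbs(kvm);
+
 	return flush;
 }

Note the returned "flush" is deliberately left untouched so callers still
exercise their normal flush logic; the hack only papers over whatever they
get wrong, it doesn't hide which path failed to flush.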