On Fri, Oct 28, 2022, David Matlack wrote:
> On Fri, Oct 28, 2022 at 2:07 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Fri, Oct 28, 2022, David Matlack wrote:
> > > I'll experiment with a more accurate solution. i.e. have the recovery
> > > worker lookup the memslot for each SP and check if it has dirty
> > > logging enabled. Maybe the increase in CPU usage won't be as bad as I
> > > thought.
> >
> > If you end up grabbing the memslot, use kvm_mmu_max_mapping_level() instead of
> > checking only dirty logging.  That way KVM will avoid zapping shadow pages that
> > could have been NX huge pages when they were created, but can no longer be NX huge
> > pages due to something other than dirty logging, e.g. because the gfn is being
> > shadowed for nested TDP.
>
> kvm_mmu_max_mapping_level() doesn't check if dirty logging is enabled

Gah, I forgot that kvm_mmu_hugepage_adjust() does a one-off check for dirty
logging instead of the info being fed into slot->arch.lpage_info.  Never mind.
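
[Editor's note: for context, a minimal sketch of the approach being discussed, i.e. having the
NX huge page recovery worker look up the memslot for each shadow page and ask
kvm_mmu_max_mapping_level() whether the gfn could be mapped huge again before zapping.  The
helper name sp_can_become_huge_again() is hypothetical, and the exact
kvm_mmu_max_mapping_level() signature varies across kernel versions; as the thread concludes,
this check would NOT cover dirty logging, which kvm_mmu_hugepage_adjust() handles separately.]

	/*
	 * Hedged sketch, not the actual patch: decide whether zapping this
	 * shadow page could plausibly let KVM re-install an NX huge page.
	 */
	static bool sp_can_become_huge_again(struct kvm *kvm,
					     struct kvm_mmu_page *sp)
	{
		struct kvm_memory_slot *slot;

		slot = gfn_to_memslot(kvm, sp->gfn);
		if (!slot)
			return false;

		/*
		 * kvm_mmu_max_mapping_level() consults slot->arch.lpage_info,
		 * e.g. it reflects gfns that are disallowed from being huge
		 * because they are shadowed for nested TDP.  Dirty logging is
		 * NOT reflected here; kvm_mmu_hugepage_adjust() checks it
		 * separately, which is why the thread drops this idea.
		 */
		return kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
						 PG_LEVEL_NUM) > PG_LEVEL_4K;
	}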