On Fri, Oct 28, 2022 at 2:07 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Fri, Oct 28, 2022, David Matlack wrote:
> > I'll experiment with a more accurate solution, i.e. have the recovery
> > worker look up the memslot for each SP and check if it has dirty
> > logging enabled. Maybe the increase in CPU usage won't be as bad as I
> > thought.
>
> If you end up grabbing the memslot, use kvm_mmu_max_mapping_level() instead of
> checking only dirty logging. That way KVM will avoid zapping shadow pages that
> could have been NX huge pages when they were created, but can no longer be NX
> huge pages due to something other than dirty logging, e.g. because the gfn is
> being shadowed for nested TDP.

kvm_mmu_max_mapping_level() doesn't check if dirty logging is enabled, and
it does the unnecessary work of checking the host mapping level (which
requires knowing the pfn). I could refactor kvm_mmu_hugepage_adjust() and
kvm_mmu_max_mapping_level() to achieve what you suggest, though.
Specifically, when recovering NX huge pages, check whether dirty logging
is enabled and whether a huge page is disallowed (lpage_info_slot()), and
share that code with the fault handler.
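
Rough sketch of what I have in mind (compile-untested, helper name made
up): split out the memslot-only portion of kvm_mmu_max_mapping_level(),
i.e. the lpage_info walk that doesn't need the pfn, so the recovery worker
and the fault handler can share it:

	/*
	 * Hypothetical helper, not actual KVM code: the part of
	 * kvm_mmu_max_mapping_level() that only consults the memslot's
	 * lpage_info and thus doesn't require knowing the pfn.
	 */
	static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
					       const struct kvm_memory_slot *slot,
					       gfn_t gfn, int max_level)
	{
		struct kvm_lpage_info *linfo;

		/*
		 * Walk down from the requested level until lpage_info no
		 * longer disallows a huge page at this gfn.
		 */
		for ( ; max_level > PG_LEVEL_4K; max_level--) {
			linfo = lpage_info_slot(gfn, slot, max_level);
			if (!linfo->disallow_lpage)
				break;
		}

		return max_level;
	}

Then the recovery worker could do something like:

	/*
	 * Leave the SP in place if its gfn still cannot be mapped huge,
	 * either because dirty logging is enabled on the slot or because
	 * lpage_info disallows a huge page; zapping it would not allow
	 * KVM to create a huge page in its stead.
	 */
	if (kvm_slot_dirty_track_enabled(slot) ||
	    __kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
					KVM_MAX_HUGEPAGE_LEVEL) == PG_LEVEL_4K)
		continue;

and kvm_mmu_hugepage_adjust() would keep its existing dirty-logging check
but call the same helper before falling back to the pfn-based host mapping
level check.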