On Thu, Aug 19, 2021 at 9:37 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Fri, Aug 13, 2021, David Matlack wrote:
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 3352312ab1c9..fb2c95e8df00 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2890,7 +2890,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
> >
> >  void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> > -	struct kvm_memory_slot *slot;
> > +	struct kvm_memory_slot *slot = fault->slot;
> >  	kvm_pfn_t mask;
> >
> >  	fault->huge_page_disallowed = fault->exec && fault->nx_huge_page_workaround_enabled;
> > @@ -2901,8 +2901,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
> >  		return;
> >
> > -	slot = gfn_to_memslot_dirty_bitmap(vcpu, fault->gfn, true);
> > -	if (!slot)
> > +	if (kvm_slot_dirty_track_enabled(slot))
>
> This is unnecessarily obfuscated.

Ugh, and it only works by pure luck. I meant to check whether the slot is
NULL here.

> It relies on the is_error_noslot_pfn() check to ensure fault->slot is
> valid, but the only reason that helper is used is that it was the most
> efficient code back when the slot wasn't available.  IMO, this would be
> better:
>
> 	if (!slot || kvm_slot_dirty_track_enabled(slot))
> 		return;
>
> 	if (kvm_is_reserved_pfn(fault->pfn))
> 		return;

That looks reasonable to me. I'll send a patch next week with this change.

> On a related topic, a good follow-up to this series would be to pass
> @fault into the prefetch helpers, and modify the prefetch logic to
> re-use fault->slot and refuse to prefetch across memslot boundaries.
> That would eliminate all users of gfn_to_memslot_dirty_bitmap() and
> allow us to drop that abomination.
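
That follow-up makes sense to me as well. For the memslot-boundary part I'm
picturing a small helper along these lines (untested sketch, the helper name
and signature are just illustrative, and it assumes the fault->slot field
added by this series):

	/*
	 * Illustrative only: return true if the prefetch window
	 * [gfn, gfn + nr) lies entirely within the memslot that was
	 * already resolved for the fault, so the prefetch paths can
	 * re-use fault->slot instead of re-resolving the slot via
	 * gfn_to_memslot_dirty_bitmap().
	 */
	static bool prefetch_within_fault_slot(struct kvm_page_fault *fault,
					       gfn_t gfn, unsigned long nr)
	{
		struct kvm_memory_slot *slot = fault->slot;

		return slot && gfn >= slot->base_gfn &&
		       gfn + nr <= slot->base_gfn + slot->npages;
	}

The prefetch helpers would bail when this returns false, and the existing
no-dirty-log handling for writable prefetches could then be keyed off
kvm_slot_dirty_track_enabled(fault->slot) instead.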