On Tue, Mar 08, 2022, Nikunj A Dadhania wrote:
> Both TDP MMU and legacy MMU do hugepage adjust in the mapping routine.
> Adjust the pfn early in the common code. This will be used by the
> following patches for pinning the pages.
>
> No functional change intended.

There is a functional change here, as kvm_mmu_hugepage_adjust() is now
called without mmu_lock being held. That really shouldn't be problematic,
but sadly KVM very, very subtly relies on calling lookup_address_in_mm()
while holding mmu_lock _and_ after checking mmu_notifier_retry_hva().

https://lore.kernel.org/all/CAL715WL7ejOBjzXy9vbS_M2LmvXcC-CxmNr+oQtCZW0kciozHA@xxxxxxxxxxxxxx

> Signed-off-by: Nikunj A Dadhania <nikunj@xxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c     | 4 ++--
>  arch/x86/kvm/mmu/tdp_mmu.c | 2 --
>  2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 8e24f73bf60b..db1feecd6fed 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2940,8 +2938,6 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	int ret;
>  	gfn_t base_gfn = fault->gfn;
>
> -	kvm_mmu_hugepage_adjust(vcpu, fault);
> -
>  	trace_kvm_mmu_spte_requested(fault);
>  	for_each_shadow_entry(vcpu, fault->addr, it) {
>  		/*
> @@ -4035,6 +4033,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>
>  	r = RET_PF_RETRY;
>
> +	kvm_mmu_hugepage_adjust(vcpu, fault);
> +
>  	if (is_tdp_mmu_fault)
>  		read_lock(&vcpu->kvm->mmu_lock);
>  	else
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index bc9e3553fba2..e03bf59b2f81 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -959,8 +959,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	u64 new_spte;
>  	int ret;
>
> -	kvm_mmu_hugepage_adjust(vcpu, fault);
> -
>  	trace_kvm_mmu_spte_requested(fault);
>
>  	rcu_read_lock();
> --
> 2.32.0
>
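
For anyone skimming the thread, the ordering the current code depends on
looks roughly like this (a paraphrased sketch of the direct_page_fault()
flow, not a verbatim excerpt; the out_unlock label and the elided pfn
fault-in are placeholders):

	/*
	 * The host page table walk done by kvm_mmu_hugepage_adjust(), i.e.
	 * lookup_address_in_mm(), is tolerable only because it runs with
	 * mmu_lock held _and_ after the mmu_notifier retry check, so a
	 * concurrent host-side invalidation forces a retry instead of KVM
	 * installing a huge SPTE based on a stale host mapping level.
	 */
	mmu_seq = vcpu->kvm->mmu_notifier_seq;		/* 1) snapshot notifier seq */
	smp_rmb();

	/* ... fault in the pfn, outside mmu_lock ... */

	write_lock(&vcpu->kvm->mmu_lock);		/* 2) take mmu_lock (read_lock for the TDP MMU) */
	if (mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
		goto out_unlock;			/* 3) retry if the range was invalidated */

	kvm_mmu_hugepage_adjust(vcpu, fault);		/* 4) only now walk the host page tables */

Hoisting step 4 above steps 2 and 3, as this patch does, is what reopens
the race with host-side invalidations.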