On Tue, Dec 13, 2022 at 03:30:30AM +0000, Sean Christopherson wrote:
> Move the hugepage adjust, tracepoint, and RCU (un)lock logic out of
> kvm_tdp_mmu_map() and into its sole caller, kvm_tdp_mmu_page_fault(), to
> eliminate the gotos used to bounce through rcu_read_unlock() when bailing
> from the walk.
>
> Opportunistically mark kvm_mmu_hugepage_adjust() as static as
> kvm_tdp_mmu_map() was the only external user.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c          |  9 ++++++++-
>  arch/x86/kvm/mmu/mmu_internal.h |  1 -
>  arch/x86/kvm/mmu/tdp_mmu.c      | 22 ++++------------------
>  3 files changed, 12 insertions(+), 20 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 254bc46234e0..99c40617d325 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3085,7 +3085,8 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>  	return min(host_level, max_level);
>  }
>
> -void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> +static void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu,
> +				    struct kvm_page_fault *fault)
>  {
>  	struct kvm_memory_slot *slot = fault->slot;
>  	kvm_pfn_t mask;
> @@ -4405,7 +4406,13 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
>  	if (is_page_fault_stale(vcpu, fault))
>  		goto out_unlock;
>
> +	kvm_mmu_hugepage_adjust(vcpu, fault);

Can you also move the call to kvm_mmu_hugepage_adjust() from direct_map()
to direct_page_fault()? I do think it's worth the maintenance burden to
keep those functions consistent. (A rough sketch of what I have in mind is
at the end of this mail.)

> +
> +	trace_kvm_mmu_spte_requested(fault);
> +
> +	rcu_read_lock();
>  	r = kvm_tdp_mmu_map(vcpu, fault);
> +	rcu_read_unlock();

I would prefer to keep these in tdp_mmu.c, to reduce the amount of TDP MMU
details that bleed into mmu.c (RCU) and for consistency with other TDP MMU
APIs that don't require the caller to acquire RCU. This will also be
helpful for the Common MMU, as the tracepoint and RCU will be common. e.g.

static int __kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	...
}

int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	int r;

	trace_kvm_mmu_spte_requested(fault);

	rcu_read_lock();

	r = __kvm_tdp_mmu_map(vcpu, fault);

	rcu_read_unlock();

	return r;
}
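
Re the direct_map() comment above, roughly what I have in mind is below.
This is only a sketch; the surrounding direct_page_fault() context is
quoted from memory and may not match your tree exactly.

static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	...
	if (is_page_fault_stale(vcpu, fault))
		goto out_unlock;

	r = make_mmu_pages_available(vcpu);
	if (r)
		goto out_unlock;

	/* Moved here from the top of direct_map(), mirroring kvm_tdp_mmu_page_fault(). */
	kvm_mmu_hugepage_adjust(vcpu, fault);

	r = direct_map(vcpu, fault);
	...
}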