On Tue, Jan 12, 2021, Ben Gardon wrote:
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index ba296ad051c3..280d7cd6f94b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5471,6 +5471,11 @@ void kvm_mmu_init_vm(struct kvm *kvm)
> 
>  	kvm_mmu_init_tdp_mmu(kvm);
> 
> +	if (kvm->arch.tdp_mmu_enabled)
> +		rwlock_init(&kvm->arch.mmu_rwlock);
> +	else
> +		spin_lock_init(&kvm->arch.mmu_lock);

Rather than use different lock types, what if we always use a rwlock, but
only acquire it for read when handling page faults for TDP MMUs?  That
would significantly reduce the amount of boilerplate conditionals.

The fast paths for write_lock() and spin_lock() are nearly identical, and
I would hope any differences in the slow paths are hidden in the noise.

> +
>  	node->track_write = kvm_mmu_pte_write;
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
> @@ -6074,3 +6079,87 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
>  	if (kvm->arch.nx_lpage_recovery_thread)
>  		kthread_stop(kvm->arch.nx_lpage_recovery_thread);
>  }
> +
> +void kvm_mmu_lock_shared(struct kvm *kvm)
> +{
> +	WARN_ON(!kvm->arch.tdp_mmu_enabled);
> +	read_lock(&kvm->arch.mmu_rwlock);
> +}
> +
> +void kvm_mmu_unlock_shared(struct kvm *kvm)
> +{
> +	WARN_ON(!kvm->arch.tdp_mmu_enabled);
> +	read_unlock(&kvm->arch.mmu_rwlock);
> +}
> +
> +void kvm_mmu_lock_exclusive(struct kvm *kvm)
> +{
> +	WARN_ON(!kvm->arch.tdp_mmu_enabled);
> +	write_lock(&kvm->arch.mmu_rwlock);
> +}
> +
> +void kvm_mmu_unlock_exclusive(struct kvm *kvm)
> +{
> +	WARN_ON(!kvm->arch.tdp_mmu_enabled);
> +	write_unlock(&kvm->arch.mmu_rwlock);
> +}

I'm not a fan of all of these wrappers.  It's extra layers and WARNs, and
introduces terminology that differs from the kernel's locking terminology,
e.g. read vs. shared.  The WARNs are particularly wasteful as these all
have exactly one caller that explicitly checks kvm->arch.tdp_mmu_enabled.
Even if we don't unconditionally use the rwlock, I think I'd prefer to
omit these rwlock wrappers and instead use read/write_lock directly (and
drop the WARNs).

> +
> +void kvm_mmu_lock(struct kvm *kvm)
> +{
> +	if (kvm->arch.tdp_mmu_enabled)
> +		kvm_mmu_lock_exclusive(kvm);
> +	else
> +		spin_lock(&kvm->arch.mmu_lock);
> +}
> +EXPORT_SYMBOL_GPL(kvm_mmu_lock);
> +
> +void kvm_mmu_unlock(struct kvm *kvm)
> +{
> +	if (kvm->arch.tdp_mmu_enabled)
> +		kvm_mmu_unlock_exclusive(kvm);
> +	else
> +		spin_unlock(&kvm->arch.mmu_lock);
> +}
> +EXPORT_SYMBOL_GPL(kvm_mmu_unlock);

These exports aren't needed; I don't see any callers in kvm_intel or
kvm_amd.  That's a moot point if we use rwlock unconditionally.

> +