On Fri, Jan 24, 2025, Sean Christopherson wrote:
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a45ae60e84ab..74c20dbb92da 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -7120,6 +7120,19 @@ static void mmu_destroy_caches(void)
>  	kmem_cache_destroy(mmu_page_header_cache);
>  }
>  
> +static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
> +{
> +	/*
> +	 * The NX recovery thread is spawned on-demand at the first KVM_RUN and
> +	 * may not be valid even though the VM is globally visible.  Do nothing,
> +	 * as such a VM can't have any possible NX huge pages.
> +	 */
> +	struct vhost_task *nx_thread = READ_ONCE(kvm->arch.nx_huge_page_recovery_thread);
> +
> +	if (nx_thread)
> +		vhost_task_wake(nx_thread);

As mentioned in the original thread[*], I belatedly realized there's a race
with this approach.  If vhost_task_start() completes and
kvm_nx_huge_page_recovery_worker() runs before a parameter change, but the
parameter change runs before the WRITE_ONCE(), then the worker will run with
stale params and could end up sleeping for far longer than userspace wants.

[*] https://lore.kernel.org/all/Z5QsBXJ7rkJFDtmK@xxxxxxxxxx

I assume we could address that by taking kvm->arch.nx_once.mutex in this
helper instead of using the lockless approach.  I don't think that would lead
to any deadlocks?
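
Very roughly, something like the below is what I have in mind.  Completely
untested sketch, and it assumes the once-init path publishes
nx_huge_page_recovery_thread while holding kvm->arch.nx_once.mutex, i.e. that
the spawn+store happens under that mutex:

static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
{
	/*
	 * Sketch only: serialize against the once-only spawning of the
	 * recovery thread.  A parameter change then either runs before the
	 * thread is spawned (nothing to wake, and the worker will see the new
	 * params when it starts), or after the pointer is published (wake
	 * works as intended).  No READ_ONCE() needed with the mutex held.
	 */
	mutex_lock(&kvm->arch.nx_once.mutex);

	if (kvm->arch.nx_huge_page_recovery_thread)
		vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);

	mutex_unlock(&kvm->arch.nx_once.mutex);
}

The cost is a mutex acquisition on every parameter change, but those should be
rare enough that it's a non-issue.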