On Thu, Dec 29, 2022, Michal Luczaj wrote:
> Move synchronize_srcu(&kvm->srcu) out of kvm->lock critical section.
>
> Signed-off-by: Michal Luczaj <mhal@xxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index da4bbd043a7b..16c89f7e98c3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6460,7 +6460,7 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
>  	struct kvm_x86_msr_filter *new_filter, *old_filter;
>  	bool default_allow;
>  	bool empty = true;
> -	int r = 0;
> +	int r;

Separate change that should be its own patch (even though it's trivial).

>  	u32 i;
>
>  	if (filter->flags & ~KVM_MSR_FILTER_VALID_MASK)
> @@ -6488,16 +6488,14 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
>  	mutex_lock(&kvm->lock);
>
>  	/* The per-VM filter is protected by kvm->lock... */
> -	old_filter = srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1);
> +	old_filter = rcu_replace_pointer(kvm->arch.msr_filter, new_filter, 1);

Same here.  Can you also add a patch to replace the '1' with
"mutex_is_locked(&kvm->lock)"?

> +	kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);

I think the request can actually be moved outside of kvm->lock too (in yet
another patch).  Iterating over vCPUs without kvm->lock is safe;
kvm_make_all_cpus_request() might race with adding a new vCPU, i.e. send an
unnecessary request, but kvm->online_vcpus is never decremented.

> -	rcu_assign_pointer(kvm->arch.msr_filter, new_filter);
> -	synchronize_srcu(&kvm->srcu);
> +	mutex_unlock(&kvm->lock);
>
> +	synchronize_srcu(&kvm->srcu);
>  	kvm_free_msr_filter(old_filter);
>
> -	kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);
> -	mutex_unlock(&kvm->lock);
> -
>  	return 0;
>  }
>
> --
> 2.39.0
>
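
Untested, but with all of the follow-up patches suggested above applied on
top, I believe the tail end of kvm_vm_ioctl_set_msr_filter() would look
something like this (kvm->lock held only for the pointer swap, the request
and the SRCU grace period both outside the critical section):

	mutex_lock(&kvm->lock);

	/* The per-VM filter is protected by kvm->lock... */
	old_filter = rcu_replace_pointer(kvm->arch.msr_filter, new_filter,
					 mutex_is_locked(&kvm->lock));

	mutex_unlock(&kvm->lock);

	kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);

	/* Ensure no vCPU is still dereferencing old_filter before freeing. */
	synchronize_srcu(&kvm->srcu);
	kvm_free_msr_filter(old_filter);

	return 0;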