On 30/07/2015 13:40, Christian Borntraeger wrote:
>> > +	/* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu, in case
>> > +	 * the caller has read kvm->online_vcpus before (as is the case
>> > +	 * for kvm_for_each_vcpu, for example).
>> > +	 */
>> >  	smp_rmb();
>
> Hmmm, wouldn't something like smp_mb__after_atomic
>
>> >  	return kvm->vcpus[i];
>> >  }
>> >
>> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> > index 8dc4828f623f..093b3d10b411 100644
>> > --- a/virt/kvm/kvm_main.c
>> > +++ b/virt/kvm/kvm_main.c
>> > @@ -2206,6 +2206,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
>> >  	}
>> >
>> >  	kvm->vcpus[atomic_read(&kvm->online_vcpus)] = vcpu;
>
> and smp_mb__before_atomic
>
> be the better function?
>
>> > +
>> > +	/* Pairs with smp_rmb() in kvm_get_vcpu. */
>> >  	smp_wmb();
>> >  	atomic_inc(&kvm->online_vcpus);

The difference between smp_rmb/smp_wmb on one side, and smp_mb on the
other side, is that smp_mb also orders earlier stores against later
loads.

smp_mb__before_atomic would be overkill for atomic_inc, where there is
no race involving the read of online_vcpus (there is only one writer,
and it is protected by kvm->lock).  We only care about ordering the
kvm->vcpus store before the kvm->online_vcpus store.  One could use a
hypothetical smp_wmb__before_atomic(), but this is not a fast path and
IIRC only on ia64 would it differ from smp_wmb().

The "after_atomic" part of smp_mb__after_atomic does not cover
atomic_read/atomic_set, so it would not be correct for kvm_get_vcpu.
There is also no earlier store to order against, because
kvm->online_vcpus is only written in the KVM_CREATE_VCPU ioctl.  Thus
smp_rmb() is enough.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html