On 17.02.20 13:04, Christian Borntraeger wrote:
> 
> 
> On 17.02.20 11:56, David Hildenbrand wrote:
>> [...]
>>>
>>> +static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>>> +{
>>> +	int r = 0;
>>> +	void __user *argp = (void __user *)cmd->data;
>>> +
>>> +	switch (cmd->cmd) {
>>> +	case KVM_PV_VM_CREATE: {
>>> +		r = -EINVAL;
>>> +		if (kvm_s390_pv_is_protected(kvm))
>>> +			break;
>>
>> Isn't this racy? I think there has to be a way to make sure the PV state
>> can't change. Is there any and I am missing something obvious? (I
>> suspect we need the kvm->lock)
> 
> 
> Yes, kvm->lock around kvm_s390_handle_pv is safer.
> 
> Something like

Yes. Probably a lockdep_assert_held() inside kvm_s390_pv_is_protected()
would make sense to catch other races.

[...]

>>
>> With an explicit KVM_PV_VCPU_CREATE, this does not belong here. When
>> hotplugging CPUs, user space has to do that manually. But as I said
>> already, this user space API could be improved. (below)
> 
> With your proposed API this would stay.

Yes

[...]

>>
>>
>> Can we please discuss why we can't
>>
>> - Get rid of KVM_S390_PV_COMMAND_VCPU
>> - Do the allocation in KVM_PV_VM_CREATE
>> - Rename KVM_PV_VM_CREATE -> KVM_PV_ENABLE
>> - Rename KVM_PV_VM_DESTROY -> KVM_PV_DISABLE
>>
>> This user space API is unnecessarily complicated and confusing.
> 
> I will have a look if this is feasible.
> 

Okay, thanks!

-- 
Thanks,

David / dhildenb
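
For reference, a minimal sketch of the locking scheme discussed above: take
kvm->lock around the whole PV command handler in the ioctl path and assert it
inside kvm_s390_pv_is_protected() so unlocked callers trip lockdep. This is an
illustration only, not the actual series; the pv.handle field and the dispatch
snippet are assumed placeholders based on the quoted patch context.

	#include <linux/kvm_host.h>
	#include <linux/lockdep.h>

	static inline bool kvm_s390_pv_is_protected(struct kvm *kvm)
	{
		/* Any check of the PV state must hold kvm->lock, otherwise a
		 * concurrent KVM_PV_VM_CREATE/DESTROY could change it underneath us.
		 */
		lockdep_assert_held(&kvm->lock);
		return !!kvm->arch.pv.handle;	/* assumed field name */
	}

	/* In the KVM_S390_PV_COMMAND ioctl dispatch (sketch): */
		mutex_lock(&kvm->lock);
		r = kvm_s390_handle_pv(kvm, &args);
		mutex_unlock(&kvm->lock);

With that, the r = -EINVAL / kvm_s390_pv_is_protected() check quoted above is
no longer racy, and any other caller that forgets the lock is caught at runtime
when lockdep is enabled.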