On Wed, Feb 07, 2018 at 03:34:03PM +0000, Dave Martin wrote:
> On Wed, Feb 07, 2018 at 03:58:31PM +0100, Christoffer Dall wrote:
> > On Wed, Feb 07, 2018 at 11:33:28AM +0000, Dave Martin wrote:
>
> [...]
>
> > > What if KVM_ARM_SVE_SET_VLS() were to yield 0 if the exact requested set
> > > of VLs was configured, -ERANGE if some subset was configured successfully
> > > (but not exactly the set requested), and the usual -EINVAL/-EIO/whatever
> > > if the set of VLs couldn't be configured at all?
> >
> > Sounds good to me.
> >
> > > Then the probe would go like this:
> > >
> > > __u64 vqs[SVE_VQ_MAX / 64] = { [0 ... SVE_VQ_MAX / 64 - 1] = ~(u64)0 };
> > > if (ioctl(vcpu_fd, KVM_ARM_SVE_SET_VLS, vqs) && errno != ERANGE)
> > > 	goto error;
> > >
> > > ioctl(vcpu_fd, KVM_ARM_SVE_GET_VLS, vqs);
> > >
> > > /* ... */
> > >
> > > Another option would be for KVM_ARM_SVE_SET_VLS to write the resulting
> > > set back to its argument, possibly with the same 0/ERANGE/EINVAL semantics.
> > >
> > > Alternatively, userspace would be required to do a KVM_ARM_SVE_GET_VLS,
> > > and check the resulting set:
> > >
> > > /* ... */
> > >
> > > __u64 newvqs[SVE_VQ_MAX / 64];
> > > ioctl(vcpu_fd, KVM_ARM_SVE_GET_VLS, newvqs);
> > >
> > > if (memcmp(vqs, newvqs, sizeof vqs))
> > > 	goto mismatch;
> > >
> > > vcpu restore would need to treat any mismatch as an error:
> > > the exact requested set but be configurable, or the VM may go wrong.
> >
> > I'm not sure I can parse this sentence or extract the meaning?
>
> That was lazy language on my part.  I'll try to explain it better:
>
> When the saved state of a migrating vcpu is being loaded into a
> newly-created vcpu on the target node, userspace needs a way to
> ensure that the set of VLs that vcpu will see when it runs is
> _exactly_ the same set it could see before migration.

Yes, and that should work fine with the -ERANGE proposal, right?

> I called this out separately because it's different from the
> case of creating a brand-new VM: in the latter case, we can't the
> kernel to provide the best set of VLs possible, but it is not an
> error not to get every VL we asked for.

This was another hard one :)  I think what you're saying is that it's
technically not an error when setting the VLs on a new VM, but it would
be in the case of migration, and therefore we need to tell userspace in
both cases what happened, and let it decide.

> > > Any opinion on which approach is best?
> >
> > I think I prefer letting KVM_ARM_SVE_SET_VLS change the supplied vqs,
> > since having it be two separate ioctls always potentially leaves room
> > for some other thread having modified the set in the meantime (or making
> > a programmer doubt if this can be the case), whereas a single ioctl() will
> > look atomic.
> >
> > The user can always call KVM_ARM_SVE_GET_VLS afterwards and should get
> > the same result.
>
> OK, I'll go with changing the supplied vqs for KVM_ARM_SVE_SET_VLS,
> but I'll retain the -ERANGE semantics (even if technically redundant)
> since that's harder to forget to check.

Agree, that should definitely be part of it.

Thanks,
-Christoffer
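
To make the probe side of this concrete, here is one way userspace could
use the agreed semantics when creating a brand-new VM, where getting only
a subset of the requested VLs is acceptable.  This is a sketch rather than
the final ABI: KVM_ARM_SVE_SET_VLS is the ioctl proposed above (not part
of any released UAPI), assumed to rewrite the supplied bitmap with the set
actually configured, to return 0 when that set is exactly what was
requested, and to fail with ERANGE when only a subset could be configured;
sve_probe_vls() is an illustrative helper name.

/*
 * Sketch only: KVM_ARM_SVE_SET_VLS is the ioctl proposed in the thread
 * above, and sve_probe_vls() is a made-up helper name.
 */
#include <errno.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/types.h>

int sve_probe_vls(int vcpu_fd, __u64 vqs[], size_t words)
{
	size_t i;

	/* Request every vector length; the kernel trims to what it supports. */
	for (i = 0; i < words; i++)
		vqs[i] = ~(__u64)0;

	if (ioctl(vcpu_fd, KVM_ARM_SVE_SET_VLS, vqs) && errno != ERANGE)
		return -1;	/* nothing could be configured at all */

	/* vqs now holds the configured set; ERANGE is expected, not an error. */
	return 0;
}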
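
The restore side of a migration, where the exact saved set is mandatory,
could then look like the sketch below, under the same assumptions.
sve_restore_vls() is likewise an illustrative name, and the array sizing
follows the SVE_VQ_MAX / 64 convention used earlier in the thread.

/*
 * Sketch only, same assumptions as above.  On restore the saved set must
 * be reproduced exactly, so ERANGE (a subset was configured, but not the
 * exact saved set) is just as fatal as any other failure.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include <asm/sigcontext.h>	/* SVE_VQ_MAX */

int sve_restore_vls(int vcpu_fd, const __u64 saved_vqs[SVE_VQ_MAX / 64])
{
	__u64 vqs[SVE_VQ_MAX / 64];

	memcpy(vqs, saved_vqs, sizeof vqs);

	if (ioctl(vcpu_fd, KVM_ARM_SVE_SET_VLS, vqs))
		return -1;	/* includes the ERANGE "partial" case */

	/* Redundant with the 0 return, but cheap and hard to forget. */
	if (memcmp(vqs, saved_vqs, sizeof vqs))
		return -1;

	return 0;
}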