On 13 December 2017 at 16:55, Dave Martin <Dave.Martin@xxxxxxx> wrote:
> Vector length control:
>
> Some means is needed to determine the set of vector lengths visible
> to guest software running on a vcpu.
>
> When a vcpu is created, the set would default to the maximal set
> that can be supported while permitting each vcpu to run on any host
> CPU.  SVE has some virtualisation quirks which mean that this set may
> exclude some vector lengths that are available to host userspace
> applications.  In the common case, however, the sets should be the
> same.
>
> * New ioctl KVM_ARM_VCPU_{SET,GET}_SVE_VLS to set or retrieve the set
>   of vector lengths available to the guest.
>
> Adding random vcpu ioctls
>
> To configure a non-default set of vector lengths,
> KVM_ARM_VCPU_SET_SVE_VLS can be called: this would only be permitted
> before the vcpu is first run.
>
> This is primarily intended to support migration, by providing a
> robust check that the destination node will run the vcpu correctly.
> In a cluster with non-uniform SVE implementations across nodes, this
> also allows a specific set of VLs to be requested that the caller
> knows is usable across the whole cluster.
>
> For migration purposes, userspace would need to do
> KVM_ARM_VCPU_GET_SVE_VLS at the origin node and store the returned
> set as VM metadata; on the destination node,
> KVM_ARM_VCPU_SET_SVE_VLS should be used to request that exact set of
> VLs.  If the destination node can't support that set of VLs, the call
> will fail.

Can we just do this with the existing ONE_REG APIs? If you expose
this via those, then QEMU doesn't need to do anything for migration
at all. This is the same way we (intend to) check any optional-feature
compatibility at each end, for instance features exposed in
guest-visible ID registers. It's just that the "register" for the
SVE vector-lengths case is one that's not visible to the guest.

thanks
-- PMM
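
For concreteness, here is a minimal userspace sketch of the migration
handshake the quoted proposal describes. The proposal names the ioctls
but does not define their payload or encoding, so the 512-bit
vector-length bitmap and the ioctl numbers below are invented for
illustration only:

/*
 * Hypothetical sketch: the struct layout (bit n set => vector length
 * (n + 1) * 128 bits supported) and the ioctl numbers 0xb6/0xb7 are
 * assumptions, not part of any uapi header.
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>		/* for KVMIO */

struct sve_vls {
	uint64_t bitmap[8];	/* assumed 512-bit set of supported VLs */
};

#define KVM_ARM_VCPU_GET_SVE_VLS  _IOR(KVMIO, 0xb6, struct sve_vls)
#define KVM_ARM_VCPU_SET_SVE_VLS  _IOW(KVMIO, 0xb7, struct sve_vls)

/* Origin node: snapshot the vcpu's VL set and store it as VM metadata. */
static int save_guest_vls(int vcpu_fd, struct sve_vls *out)
{
	return ioctl(vcpu_fd, KVM_ARM_VCPU_GET_SVE_VLS, out);
}

/*
 * Destination node: request exactly the recorded set before the vcpu
 * first runs; a failure aborts the migration instead of silently
 * giving the guest a different VL set.
 */
static int restore_guest_vls(int vcpu_fd, const struct sve_vls *vls)
{
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_SET_SVE_VLS, vls) < 0) {
		perror("destination cannot honour recorded VL set");
		return -1;
	}
	return 0;
}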
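
By contrast, a sketch of the ONE_REG route suggested in the reply:
struct kvm_one_reg and KVM_{GET,SET}_ONE_REG are the existing uapi,
but the register ID below is a made-up placeholder, since no such
guest-invisible pseudo-register is defined at the time of writing:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Placeholder ID for a guest-invisible "VL set" pseudo-register. */
#define REG_ARM64_SVE_VLS_EXAMPLE \
	(KVM_REG_ARM64 | KVM_REG_SIZE_U512 | 0x1234)

static int get_vls_onereg(int vcpu_fd, uint64_t vls[8])
{
	struct kvm_one_reg reg = {
		.id   = REG_ARM64_SVE_VLS_EXAMPLE,
		.addr = (uint64_t)(unsigned long)vls,
	};

	/* Same call path userspace already uses for every other
	 * register, so it is picked up by generic save/restore code. */
	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

static int set_vls_onereg(int vcpu_fd, const uint64_t vls[8])
{
	struct kvm_one_reg reg = {
		.id   = REG_ARM64_SVE_VLS_EXAMPLE,
		.addr = (uint64_t)(unsigned long)vls,
	};

	/* Fails on a destination whose VL set can't match, giving the
	 * same robust migration check with no SVE-specific logic. */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

Because the value travels through the same get/set path as every
architected register, existing migration code would transport and
cross-check it automatically, which is the point being made in the
reply.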