On Thu, Jul 19, 2018 at 05:24:20PM +0200, Andrew Jones wrote:
> On Thu, Jun 21, 2018 at 03:57:40PM +0100, Dave Martin wrote:
> > This patch reports the availability of KVM SVE support to userspace
> > via a new vcpu feature flag KVM_ARM_VCPU_SVE. This flag is
> > reported via the KVM_ARM_PREFERRED_TARGET ioctl.
> > 
> > Userspace can enable the feature by setting the flag for
> > KVM_ARM_VCPU_INIT. Without this flag set, SVE-related ioctls and
> > register access extensions are hidden, and SVE remains disabled
> > unconditionally for the guest. This ensures that non-SVE-aware KVM
> > userspace does not receive a vcpu that it does not understand how
> > to snapshot or restore correctly.
> > 
> > Storage is allocated for the SVE register state at vcpu init time,
> > sufficient for the maximum vector length to be exposed to the vcpu.
> > No attempt is made to allocate the storage lazily for now. Also,
> > no attempt is made to resize the storage dynamically, since the
> > effective vector length of the vcpu can change at each EL0/EL1
> > transition. The storage is freed at the vcpu uninit hook.
> > 
> > No particular attempt is made to prevent userspace from creating a
> > mix of vcpus some of which have SVE enabled and some of which have
> > it disabled. This may or may not be useful, but it reflects the
> > underlying architectural behaviour.
> > 
> > Signed-off-by: Dave Martin <Dave.Martin@xxxxxxx>
> > ---
> >  arch/arm64/include/asm/kvm_host.h |  6 +++---
> >  arch/arm64/include/uapi/asm/kvm.h |  1 +
> >  arch/arm64/kvm/guest.c            | 19 +++++++++++++------
> >  arch/arm64/kvm/reset.c            | 14 ++++++++++++++
> >  4 files changed, 31 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index d2084ae..d956cf2 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -44,7 +44,7 @@
> >  
> >  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
> >  
> > -#define KVM_VCPU_MAX_FEATURES 4
> > +#define KVM_VCPU_MAX_FEATURES 5
> >  
> >  #define KVM_REQ_SLEEP \
> >  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> > @@ -439,8 +439,8 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> >  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> >  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
> >  
> > -static inline int kvm_arm_arch_vcpu_init(struct kvm_vcpu *vcpu) { return 0; }
> > -static inline void kvm_arm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
> > +int kvm_arm_arch_vcpu_init(struct kvm_vcpu *vcpu);
> > +void kvm_arm_arch_vcpu_uninit(struct kvm_vcpu *vcpu);
> >  
> >  void kvm_arm_init_debug(void);
> >  void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> > index f54a9b0..6acf276 100644
> > --- a/arch/arm64/include/uapi/asm/kvm.h
> > +++ b/arch/arm64/include/uapi/asm/kvm.h
> > @@ -101,6 +101,7 @@ struct kvm_regs {
> >  #define KVM_ARM_VCPU_EL1_32BIT	1 /* CPU running a 32bit VM */
> >  #define KVM_ARM_VCPU_PSCI_0_2	2 /* CPU uses PSCI v0.2 */
> >  #define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */
> > +#define KVM_ARM_VCPU_SVE	4 /* Allow SVE for guest */
> >  
> >  struct kvm_vcpu_init {
> >  	__u32 target;
> > diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> > index 5152362..fb7f6aa 100644
> > --- a/arch/arm64/kvm/guest.c
> > +++ b/arch/arm64/kvm/guest.c
> > @@ -58,6 +58,16 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
> >  	return 0;
> >  }
> >  
> > +int kvm_arm_arch_vcpu_init(struct kvm_vcpu *vcpu)
> > +{
> > +	return 0;
> > +}
> > +
> > +void kvm_arm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
> > +{
> > +	kfree(vcpu->arch.sve_state);
> > +}
> > +
> >  static u64 core_reg_offset_from_id(u64 id)
> >  {
> >  	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
> > @@ -600,12 +610,9 @@ int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
> >  
> >  	memset(init, 0, sizeof(*init));
> >  
> > -	/*
> > -	 * For now, we don't return any features.
> > -	 * In future, we might use features to return target
> > -	 * specific features available for the preferred
> > -	 * target type.
> > -	 */
> > +	/* KVM_ARM_VCPU_SVE understood by KVM_VCPU_INIT */
> > +	init->features[0] = 1 << KVM_ARM_VCPU_SVE;
> > +
> >  	init->target = (__u32)target;
> >  
> >  	return 0;
> > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > index a74311b..f63a791 100644
> > --- a/arch/arm64/kvm/reset.c
> > +++ b/arch/arm64/kvm/reset.c
> > @@ -110,6 +110,20 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  		cpu_reset = &default_regs_reset;
> >  	}
> >  
> > +	if (system_supports_sve() &&
> > +	    test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) {
> > +		vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
> > +
> > +		vcpu->arch.sve_max_vl = sve_max_virtualisable_vl;
> > +
> 
> The allocation below needs to be guarded by an if (!vcpu->arch.sve_state),
> otherwise every time the guest does a PSCI-off/PSCI-on cycle of the vcpu
> we'll have a memory leak. Or, we need to move this allocation into the
> new kvm_arm_arch_vcpu_init() function. Why did you opt for
> kvm_reset_vcpu()?

I think I failed to find another suitable init function that gets called
at the right time. I may have got confused though.

Good spot on the PSCI interaction.
In light of that, I agree: the SVE buffer allocation should not be done
here.

I'll have a think about how to refactor this, following the discussion.

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm