On Thu, Feb 06, 2025 at 02:10:56PM +0000, Mark Rutland wrote:
> Now that the host eagerly saves its own FPSIMD/SVE/SME state,
> non-protected KVM never needs to save the host FPSIMD/SVE/SME state,
> and the code to do this is never used. Protected KVM still needs to
> save/restore the host FPSIMD/SVE state to avoid leaking guest state to
> the host (and to avoid revealing to the host whether the guest used
> FPSIMD/SVE/SME), and that code needs to be retained.
>
> Remove the unused code and data structures.
>
> To avoid the need for a stub copy of kvm_hyp_save_fpsimd_host() in the
> VHE hyp code, the nVHE/hVHE version is moved into the shared switch
> header, where it is only invoked when KVM is in protected mode.
>
> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
> Reviewed-by: Mark Brown <broonie@xxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Fuad Tabba <tabba@xxxxxxxxxx>
> Cc: Marc Zyngier <maz@xxxxxxxxxx>
> Cc: Mark Brown <broonie@xxxxxxxxxx>
> Cc: Oliver Upton <oliver.upton@xxxxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_host.h       | 20 +++++-------------
>  arch/arm64/kvm/arm.c                    |  8 -------
>  arch/arm64/kvm/fpsimd.c                 |  2 --
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 25 ++++++++++++++++++++--
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  2 +-
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 28 -------------------------
>  arch/arm64/kvm/hyp/vhe/switch.c         |  8 -------
>  7 files changed, 29 insertions(+), 64 deletions(-)

[...]

> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index f838a45665f26..c5b8a11ac4f50 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -375,7 +375,28 @@ static inline void __hyp_sve_save_host(void)
>  			 true);
>  }
>
> -static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
> +static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
> +{
> +	/*
> +	 * Non-protected kvm relies on the host restoring its sve state.
> +	 * Protected kvm restores the host's sve state so as not to reveal
> +	 * that fpsimd was used by a guest, nor to leak upper sve bits.
> +	 */
> +	if (system_supports_sve()) {
> +		__hyp_sve_save_host();
> +
> +		/* Re-enable SVE traps if not supported for the guest vcpu. */
> +		if (!vcpu_has_sve(vcpu))
> +			cpacr_clear_set(CPACR_EL1_ZEN, 0);
> +
> +	} else {
> +		__fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
> +	}
> +
> +	if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm)))
> +		*host_data_ptr(fpmr) = read_sysreg_s(SYS_FPMR);
> +}
> +
>
>  /*
>   * We trap the first access to the FP/SIMD to save the host context and
> @@ -425,7 +446,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	isb();
>
>  	/* Write out the host state if it's in the registers */
> -	if (host_owns_fp_regs())
> +	if (is_protected_kvm_enabled() && host_owns_fp_regs())
>  		kvm_hyp_save_fpsimd_host(vcpu);

I wondered briefly whether this would allow us to clean up the CPACR
handling a little and avoid the conditional SVE trap re-enabling inside
kvm_hyp_save_fpsimd_host(), but I couldn't come up with a clean way to
do it without an additional ISB. Hrm.

Anyway, as far as the patch goes:

Acked-by: Will Deacon <will@xxxxxxxxxx>

Will
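
P.S. For anyone skimming the thread, the upshot of the second hunk is
that the lazy host save in the FP/SIMD trap handler becomes a
pKVM-only affair. A heavily condensed sketch of the post-patch flow is
below; the helper names are the real ones from the patch, but the
trap-validity checks are elided and the CPACR handling is paraphrased
from the surrounding (unquoted) context, so treat it as an
illustration rather than the literal source:

static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	/* ... checks that this is a trap we can/should handle ... */

	/*
	 * Lower the CPACR traps so that the FP/SVE registers can be
	 * accessed at all. pKVM also needs the SVE traps (ZEN) lowered
	 * here so that kvm_hyp_save_fpsimd_host() can save the host's
	 * full SVE state, not just the shared FPSIMD subset.
	 */
	if (vcpu_has_sve(vcpu) ||
	    (is_protected_kvm_enabled() && system_supports_sve()))
		cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN);
	else
		cpacr_clear_set(0, CPACR_EL1_FPEN);
	isb();

	/*
	 * Only pKVM still saves the host state here: in non-protected
	 * mode the host saved its own FPSIMD/SVE/SME state eagerly
	 * before entering the guest, so there is nothing to write out.
	 */
	if (is_protected_kvm_enabled() && host_owns_fp_regs())
		kvm_hyp_save_fpsimd_host(vcpu);

	/* ... restore the guest's FP/SIMD (or SVE) state and resume ... */

	return true;
}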