Hi Oliver,

On Mon, Jul 19, 2021 at 8:43 PM Oliver Upton <oupton@xxxxxxxxxx> wrote:
>
> On Mon, Jul 19, 2021 at 9:04 AM Fuad Tabba <tabba@xxxxxxxxxx> wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force run AArch32, potentially
> > causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@xxxxxxxxxx>
>
> Would it make sense to document how we handle misbehaved guests, in
> case a particular VMM wants to clean up the mess afterwards?

I agree, especially since with this patch this could happen in more
than one place.

Thanks,
/fuad

> --
> Thanks,
> Oliver
>
> > ---
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 8431f1514280..f09343e15a80 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -23,6 +23,7 @@
> >  #include <asm/kprobes.h>
> >  #include <asm/kvm_asm.h>
> >  #include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> >  #include <asm/fpsimd.h>
> > @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  		write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
> >  	}
> >
> > +	/*
> > +	 * Protected VMs might not be allowed to run in AArch32. The check below
> > +	 * is based on the one in kvm_arch_vcpu_ioctl_run().
> > +	 * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> > +	 * prevent a guest from dropping to AArch32 EL0 if implemented by the
> > +	 * CPU. If the hypervisor spots a guest in such a state ensure it is
> > +	 * handled, and don't trust the host to spot or fix it.
> > +	 */
> > +	if (unlikely(is_nvhe_hyp_code() &&
> > +		     kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> > +		     FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> > +			       PVM_ID_AA64PFR0_ALLOW) <
> > +		     ID_AA64PFR0_ELx_32BIT_64BIT &&
> > +		     vcpu_mode_is_32bit(vcpu))) {
> > +		/*
> > +		 * As we have caught the guest red-handed, decide that it isn't
> > +		 * fit for purpose anymore by making the vcpu invalid.
> > +		 */
> > +		vcpu->arch.target = -1;
> > +		*exit_code = ARM_EXCEPTION_IL;
> > +		goto exit;
> > +	}
> > +
> >  	/*
> >  	 * We're using the raw exception code in order to only process
> >  	 * the trap if no SError is pending. We will come back to the
> > --
> > 2.32.0.402.g57bb445576-goog
> >
> > _______________________________________________
> > kvmarm mailing list
> > kvmarm@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
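
A rough sketch of the VMM-side cleanup discussed above, under a couple of
assumptions rather than anything the patch itself defines: that
ARM_EXCEPTION_IL reaches userspace as a failed KVM_RUN with exit_reason set
to KVM_EXIT_FAIL_ENTRY, and that a vcpu whose target has been reset to -1
rejects further KVM_RUN calls with ENOEXEC until the VMM issues
KVM_ARM_VCPU_INIT again. The run_vcpu_once() helper and its arguments are
hypothetical, for illustration only:

#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hypothetical VMM helper: run one vcpu and clean up if the hypervisor
 * has invalidated it after catching it in AArch32. vcpu_fd, run (the
 * mmap'ed kvm_run structure) and init are assumed to have been set up
 * by the VMM beforehand.
 */
static int run_vcpu_once(int vcpu_fd, struct kvm_run *run,
			 struct kvm_vcpu_init *init)
{
	if (ioctl(vcpu_fd, KVM_RUN, 0) == 0)
		return 0;

	if (errno == ENOEXEC) {
		/*
		 * The vcpu was marked invalid (target reset to -1) and
		 * stays unusable until it is re-initialised. The VMM can
		 * either tear the guest down or retry KVM_ARM_VCPU_INIT.
		 */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
	}

	if (run->exit_reason == KVM_EXIT_FAIL_ENTRY) {
		/* The guest was caught in a state this VM doesn't allow. */
		fprintf(stderr, "vcpu entered an unsupported mode\n");
		return -1;
	}

	return -1;
}

The split between the two checks assumes that only the first failing KVM_RUN
carries the fail-entry reason; once the vcpu has been invalidated, later
attempts would be refused before the guest is ever entered.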