On a system without uniform support for AArch32 at EL0, it is possible
for the guest to force run AArch32 at EL0 and potentially cause an
illegal exception if running on the wrong core.

Add an extra check to catch if the guest ever does that and prevent it
from running again, by resetting vcpu->arch.target and returning
ARM_EXCEPTION_IL. We try to catch this misbehavior as early as possible
and not rely on PSTATE.IL to occur.

Tested on Juno by instrumenting the host to:

  * Fake asym aarch32.
  * Instrument KVM to make the asymmetry visible to the guest.

Any attempt to run a 32bit app in the guest will produce an error like
the following on qemu:

	# ./test
	error: kvm run failed Invalid argument
	PC=ffff800010945080 X00=ffff800016a45014 X01=ffff800010945058
	X02=ffff800016917190 X03=0000000000000000 X04=0000000000000000
	X05=00000000fffffffb X06=0000000000000000 X07=ffff80001000bab0
	X08=0000000000000000 X09=0000000092ec5193 X10=0000000000000000
	X11=ffff80001608ff40 X12=ffff000075fcde86 X13=ffff000075fcde88
	X14=ffffffffffffffff X15=ffff00007b2105a8 X16=ffff00007b006d50
	X17=0000000000000000 X18=0000000000000000 X19=ffff00007a82b000
	X20=0000000000000000 X21=ffff800015ccd158 X22=ffff00007a82b040
	X23=ffff00007a82b008 X24=0000000000000000 X25=ffff800015d169b0
	X26=ffff8000126d05bc X27=0000000000000000 X28=0000000000000000
	X29=ffff80001000ba90 X30=ffff80001093f3dc  SP=ffff80001000ba90
	PSTATE=60000005 -ZC- EL1h
	qemu-system-aarch64: Failed to get KVM_REG_ARM_TIMER_CNT
	Aborted

Signed-off-by: Qais Yousef <qais.yousef@xxxxxxx>
---
 arch/arm64/kvm/arm.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b588c3b5c2f0..c2fa57f56a94 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -804,6 +804,19 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		preempt_enable();
 
+		/*
+		 * The ARMv8 architecture doesn't give the hypervisor
+		 * a mechanism to prevent a guest from dropping to AArch32 EL0
+		 * if implemented by the CPU. If we spot the guest in such
+		 * state and that we decided it wasn't supposed to do so (like
+		 * with the asymmetric AArch32 case), return to userspace with
+		 * a fatal error.
+		 */
+		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+			vcpu->arch.target = -1;
+			ret = ARM_EXCEPTION_IL;
+		}
+
 		ret = handle_exit(vcpu, ret);
 	}
-- 
2.25.1
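
For context, the "kvm run failed Invalid argument" line in the qemu output above is the VMM side of this change: once handle_exit() sees ARM_EXCEPTION_IL, KVM_RUN fails with -1/EINVAL and userspace must treat that as fatal rather than re-entering the guest. A minimal sketch of that handling pattern, where fake_kvm_run() is a hypothetical stand-in for ioctl(vcpu_fd, KVM_RUN, 0) so the snippet runs without /dev/kvm:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stand-in for ioctl(vcpu_fd, KVM_RUN, 0): it simulates
 * the -1/EINVAL that KVM_RUN produces once the vcpu has been flagged
 * with ARM_EXCEPTION_IL (and vcpu->arch.target reset to -1).
 */
static int fake_kvm_run(void)
{
	errno = EINVAL;
	return -1;
}

/*
 * VMM-side pattern: a negative KVM_RUN return is fatal here; report
 * the error and stop the vcpu thread instead of re-entering the guest.
 */
static int run_vcpu_once(void)
{
	if (fake_kvm_run() < 0) {
		fprintf(stderr, "error: kvm run failed %s\n",
			strerror(errno));
		return -1;
	}
	return 0;
}
```

qemu's vcpu loop follows the same shape: on a negative KVM_RUN return it prints the register dump shown above and aborts.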