Hi Marc,

On Mon, Oct 11, 2021 at 2:11 PM Marc Zyngier <maz@xxxxxxxxxx> wrote:
>
> On Sun, 10 Oct 2021 15:56:36 +0100,
> Fuad Tabba <tabba@xxxxxxxxxx> wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force run AArch32, potentially
> > causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > If this were to happen, The VMM can try and fix it by re-
> > initializing the vcpu with KVM_ARM_VCPU_INIT, however, this is
> > likely not possible for protected VMs.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@xxxxxxxxxx>
> > ---
> >  arch/arm64/kvm/hyp/nvhe/switch.c | 34 ++++++++++++++++++++++++++++++++
> >  1 file changed, 34 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index 2c72c31e516e..f25b6353a598 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -232,6 +232,37 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
> >  	return hyp_exit_handlers;
> >  }
> >
> > +/*
> > + * Some guests (e.g., protected VMs) are not be allowed to run in AArch32.
> > + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
> > + * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> > + * hypervisor spots a guest in such a state ensure it is handled, and don't
> > + * trust the host to spot or fix it. The check below is based on the one in
> > + * kvm_arch_vcpu_ioctl_run().
> > + *
> > + * Returns false if the guest ran in AArch32 when it shouldn't have, and
> > + * thus should exit to the host, or true if a the guest run loop can continue.
> > + */
> > +static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
> > +{
> > +	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> > +
> > +	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
> > +		/*
> > +		 * As we have caught the guest red-handed, decide that it isn't
> > +		 * fit for purpose anymore by making the vcpu invalid. The VMM
> > +		 * can try and fix it by re-initializing the vcpu with
> > +		 * KVM_ARM_VCPU_INIT, however, this is likely not possible for
> > +		 * protected VMs.
> > +		 */
> > +		vcpu->arch.target = -1;
> > +		*exit_code = ARM_EXCEPTION_IL;

> Aren't we losing a potential SError here, which the original commit
> doesn't need to handle? I'd expect something like:
>
> 	*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
> 	*exit_code |= ARM_EXCEPTION_IL;

Yes, you're right. That would ensure the SError is preserved.

Thanks,
/fuad

> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> >  /* Switch to the guest for legacy non-VHE systems */
> >  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  {
> > @@ -294,6 +325,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  		/* Jump in the fire! */
> >  		exit_code = __guest_enter(vcpu);
> >
> > +		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
> > +			break;
> > +
> >  		/* And we're baaack! */
> >  	} while (fixup_guest_exit(vcpu, &exit_code));
> >
>
> Thanks,
>
> 	M.
>
> --
> Without deviation from the norm, progress is not possible.

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
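
For reference, with Marc's suggested masking folded in, the check would look roughly like the sketch below. This is only an illustration pieced together from the hunks quoted above, not a respin of the patch, and it assumes the helpers the patch already uses (kern_hyp_va(), kvm_vm_is_protected(), vcpu_mode_is_32bit()).

/*
 * Sketch of handle_aarch32_guest() with the exit-code masking suggested
 * above: keep a pending SError (ARM_EXIT_WITH_SERROR_BIT) while replacing
 * the exception class with ARM_EXCEPTION_IL. Illustrative only.
 */
static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);

	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
		/* Invalidate the vcpu so it cannot simply be run again. */
		vcpu->arch.target = -1;

		/* Preserve a pending SError, force an illegal-exception exit. */
		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
		*exit_code |= ARM_EXCEPTION_IL;

		return false;
	}

	return true;
}

The masking keeps only the SError flag bit of the previous exit code, so a pending SError is still visible alongside the ARM_EXCEPTION_IL reason rather than being overwritten.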