On Fri, Sep 25, 2020 at 05:01:02PM +0100, James Morse wrote:
> Commit 011e5f5bf529 ("arm64/cpufeature: Add remaining feature bits in
> ID_AA64PFR0 register") proactively added published features to the
> cpufeature id registers.
>
> If the platform supports these features, they are visible in the
> sanitised ID registers that are exposed to KVM guests. This is a
> problem as KVM doesn't support MPAM.
>
> The hardware reset behaviour of MPAM is to be disabled at EL3. It
> is unlikely anyone would ship a platform without firmware support,
> the necessary initialisation has been upstream in the TF-A project
> for over a year.
>
> Firmware configures the EL2 registers to trap EL1 and EL0 access
> to EL2. As KVM doesn't support MPAM, it doesn't change these
> registers. Booting an MPAM capable kernel as a guest of mainline
> causes KVM to take an unknown trap from an EL1 guest, and inject
> an undef in response:
>
> host:
> | kvm [126]: Unsupported guest sys_reg access at: ffff800010093f24 [00000005]
> |  { Op0( 3), Op1( 0), CRn(10), CRm( 5), Op2( 0), func_read },
>
> guest:
> | ------------[ cut here ]------------
> | kernel BUG at arch/arm64/kernel/traps.c:409!
> | Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> | Modules linked in:
> | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc1-00152-g570fa7e2d2ad #11605
> | Hardware name: linux,dummy-virt (DT)
> | pstate: 00000005 (nzcv daif -PAN -UAO)
> | pc : do_undefinstr+0x2ec/0x310
> | lr : do_undefinstr+0x2f8/0x310
> ...
>
> This is a tad unfair on the guest as KVM said it supported the
> feature. Mask out the MPAM feature.
>
> Fixes: 011e5f5bf529 ("arm64/cpufeature: Add remaining feature bits in
> ID_AA64PFR0 register")
> Cc: <stable@xxxxxxxxxxxxxxx>
> Cc: Anshuman Khandual <anshuman.khandual@xxxxxxx>
> Signed-off-by: James Morse <james.morse@xxxxxxx>
>
> ---
> I'll be back at rc1 with the minimal KVM support to ensure the traps
> are enabled and handled silently.
> ---
>  arch/arm64/kvm/sys_regs.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 077293b5115f..f736791f37ca 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1131,6 +1131,7 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>  		if (!vcpu_has_sve(vcpu))
>  			val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
>  		val &= ~(0xfUL << ID_AA64PFR0_AMU_SHIFT);
> +		val &= ~(0xfUL << ID_AA64PFR0_MPAM_SHIFT);
>  	} else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
>  		val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
>  			 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> --
> 2.28.0
>

Hi James,

Thanks for this fix.

Reviewed-by: Andrew Jones <drjones@xxxxxxxxxx>

But, going forward, I think we need a more robust solution to CPU
feature additions in order to avoid these types of issues. Our current
approach is to patch KVM to hide features from the guest as we
introduce support to the [guest] kernel. IOW, we have to remember to
maintain a guest CPU feature reject-list, and since that's error-prone,
we should do regular audits of the reject-list to ensure it's complete.
It would be better to have an accept-list (all features masked by
default) and then only expose features as we add the KVM support.
Maybe we should introduce KVM masks for each ID register? A rough
sketch of what I mean is below.

Also, regarding the current implementation, do you know if a recent
audit has been conducted to ensure (now with MPAM) that the current
feature hiding is complete?
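To make the accept-list idea concrete, here is a rough, untested sketch.
The struct, table and apply_id_reg_mask() are hypothetical names for
illustration only, not existing KVM code; the SYS_ID_* and
ID_AA64PFR0_*_SHIFT definitions are the usual ones from
arch/arm64/include/asm/sysreg.h:

/*
 * Hypothetical per-ID-register accept-list: each entry names the
 * feature fields KVM knowingly supports; every other field reads
 * as zero for the guest.
 */
struct id_reg_mask {
	u32 sys_reg;	/* encoding of the ID register */
	u64 allow;	/* fields KVM may expose to guests */
};

static const struct id_reg_mask id_reg_masks[] = {
	{
		.sys_reg = SYS_ID_AA64PFR0_EL1,
		/* e.g. FP, AdvSIMD, GIC; SVE/AMU/MPAM stay hidden */
		.allow = (0xfUL << ID_AA64PFR0_FP_SHIFT) |
			 (0xfUL << ID_AA64PFR0_ASIMD_SHIFT) |
			 (0xfUL << ID_AA64PFR0_GIC_SHIFT),
	},
	/* ... one entry per ID register KVM emulates ... */
};

static u64 apply_id_reg_mask(u32 id, u64 val)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(id_reg_masks); i++) {
		if (id_reg_masks[i].sys_reg == id)
			return val & id_reg_masks[i].allow;
	}

	/* No entry: expose nothing rather than something unaudited. */
	return 0;
}

read_id_reg() could then call apply_id_reg_mask() instead of clearing
fields one by one, so a newly published feature stays hidden until
someone deliberately adds it to the table.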
Thanks,
drew