On Wed, Aug 28, 2019 at 02:38:19PM +0100, Alexandru Elisei wrote:
> When a guest tries to execute code from MMIO memory, KVM injects an
> external abort into that guest. We have now fixed the psci test to not
> fetch instructions from the I/O region, and it's not that often that a
> guest misbehaves in such a way. Let's expand our coverage by adding a
> proper test targeting this corner case.
> 
> Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
> ---
> The fault injection path is broken for nested guests [1]. You can use the
> last patch from the thread [2] to successfully run the test at EL2.
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg745391.html
> [2] https://www.spinics.net/lists/arm-kernel/msg750310.html
> 
>  lib/arm64/asm/esr.h |  3 ++
>  arm/selftest.c      | 96 +++++++++++++++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 96 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/arm64/asm/esr.h b/lib/arm64/asm/esr.h
> index 8e5af4d90767..8c351631b0a0 100644
> --- a/lib/arm64/asm/esr.h
> +++ b/lib/arm64/asm/esr.h
> @@ -44,4 +44,7 @@
>  #define ESR_EL1_EC_BKPT32	(0x38)
>  #define ESR_EL1_EC_BRK64	(0x3C)
>  
> +#define ESR_EL1_FSC_MASK	(0x3F)
> +#define ESR_EL1_FSC_EXTABT	(0x10)
> +
>  #endif /* _ASMARM64_ESR_H_ */
> diff --git a/arm/selftest.c b/arm/selftest.c
> index 176231f32ee1..18cc0ad8f729 100644
> --- a/arm/selftest.c
> +++ b/arm/selftest.c
> @@ -16,6 +16,8 @@
>  #include <asm/psci.h>
>  #include <asm/smp.h>
>  #include <asm/barrier.h>
> +#include <asm/mmu.h>
> +#include <asm/pgtable.h>
>  
>  static void __user_psci_system_off(void)
>  {
> @@ -60,9 +62,38 @@ static void check_setup(int argc, char **argv)
>  		report_abort("missing input");
>  }
>  
> +extern pgd_t *mmu_idmap;
> +static void prep_io_exec(void)
> +{
> +	pgd_t *pgd = pgd_offset(mmu_idmap, 0);
> +	unsigned long sctlr;
> +
> +	/*
> +	 * AArch64 treats all regions writable at EL0 as PXN.

I didn't think that was the case, and I can't find wording to that
effect in the ARM ARM (looking at ARM DDI 0487E.a). Where is that
stated?

> +	 * Clear the user bit
> +	 * so we can execute code from the bottom I/O space (0G-1G) to simulate
> +	 * a misbehaved guest.
> +	 */
> +	pgd_val(*pgd) &= ~PMD_SECT_USER;
> +	flush_dcache_addr((unsigned long)pgd);

The virtualization extensions imply coherent page table walks, so I
don't think the cache maintenance is necessary (provided
TCR_EL1.{SH*,ORGN*,IRGN*} are configured appropriately).

> +	flush_tlb_page(0);
> +
> +	/* Make sure we can actually execute from a writable region */
> +#ifdef __arm__
> +	asm volatile("mrc p15, 0, %0, c1, c0, 0": "=r" (sctlr));
> +	sctlr &= ~CR_ST;
> +	asm volatile("mcr p15, 0, %0, c1, c0, 0" :: "r" (sctlr));
> +#else
> +	sctlr = read_sysreg(sctlr_el1);
> +	sctlr &= ~SCTLR_EL1_WXN;
> +	write_sysreg(sctlr, sctlr_el1);
> +#endif
> +	isb();
> +}

Thanks,
Mark.