On Wed, Aug 24, 2016 at 06:02:08PM +0100, Mark Rutland wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
> invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
> mapping entries may still live in TLBs, and we risk violating
> Break-Before-Make requirements, leading to TLB conflicts and/or other issues.
>
> We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
> this we are subject to issues relating to the Break-Before-Make violation.
>
> Avoid these issues by invalidating the TLBs before the new mappings can be
> used by the hardware.
>
> Fixes: f80fb3a3d50843a4 ("arm64: add support for kernel ASLR")
> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Will Deacon <will.deacon@xxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
>  arch/arm64/kernel/head.S | 3 +++
>  1 file changed, 3 insertions(+)

Acked-by: Will Deacon <will.deacon@xxxxxxx>

Although I do wonder whether it would be cleaner to do the local TLBI in
__create_page_tables after zeroing swapper, and then move the TLBI out of
__cpu_setup and onto the secondary boot path. I suppose it doesn't really
matter...

Will

> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b77f583..3e7b050 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -757,6 +757,9 @@ ENTRY(__enable_mmu)
>  	isb
>  	bl	__create_page_tables		// recreate kernel mapping
>
> +	tlbi	vmalle1				// Remove any stale TLB entries
> +	dsb	nsh
> +
>  	msr	sctlr_el1, x19			// re-enable the MMU
>  	isb
>  	ic	iallu				// flush instructions fetched
> --
> 2.7.4
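
For completeness, what I had in mind above would be something like the
below -- a rough sketch only, completely untested, and the exact anchor
point after the swapper zeroing in __create_page_tables is an assumption
on my part:

	/*
	 * __create_page_tables: after the idmap and swapper page tables
	 * have been zeroed.
	 */
	tlbi	vmalle1				// Invalidate stale local TLB entries
	dsb	nsh

with the corresponding local "tlbi vmalle1; dsb nsh" dropped from
__cpu_setup and issued instead on the secondary boot path (somewhere
like secondary_startup), since secondaries never run
__create_page_tables. The KASLR path would then pick up the
invalidation for free via the __create_page_tables call in the hunk
above.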