This is a note to let you know that I've just added the patch titled

    ARM: 8299/1: mm: ensure local active ASID is marked as allocated on rollover

to the 3.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm-8299-1-mm-ensure-local-active-asid-is-marked-as-allocated-on-rollover.patch
and it can be found in the queue-3.14 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 8e64806672466392acf19e14427d1c29df3e58b9 Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@xxxxxxx>
Date: Thu, 29 Jan 2015 16:41:46 +0100
Subject: ARM: 8299/1: mm: ensure local active ASID is marked as allocated on rollover

From: Will Deacon <will.deacon@xxxxxxx>

commit 8e64806672466392acf19e14427d1c29df3e58b9 upstream.

Commit e1a5848e3398 ("ARM: 7924/1: mm: don't bother with reserved ttbr0
when running with LPAE") removed the use of the reserved TTBR0 value
for LPAE systems, since the ASID is held in the TTBR and can be updated
atomically with the pgd of the next mm.

Unfortunately, this patch forgot to update flush_context, which
deliberately avoids marking the local active ASID as allocated, since we
used to switch via ASID zero and didn't need to allocate the ASID of
the previous mm.

The side-effect of this is that we can allocate the same ASID to the
next mm and, between flushing the local TLB and updating TTBR0, we can
perform speculative TLB fills for userspace nG mappings using the page
table of the previous mm.

The consequence of this is that the next mm can erroneously hit some
mappings of the previous mm. Note that this was made significantly
harder to hit by a391263cd84e ("ARM: 8203/1: mm: try to re-use old ASID
assignments following a rollover") but is still theoretically possible.

This patch fixes the problem by removing the code from flush_context
that forces the allocated ASID to zero for the local CPU.

Many thanks to the Broadcom guys for tracking this one down.

Fixes: e1a5848e3398 ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE")

Reported-by: Raymond Ngun <rngun@xxxxxxxxxxxx>
Tested-by: Raymond Ngun <rngun@xxxxxxxxxxxx>
Reviewed-by: Gregory Fong <gregory.0xf0@xxxxxxxxx>
Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Russell King <rmk+kernel@xxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 arch/arm/mm/context.c |   26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)

--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -144,21 +144,17 @@ static void flush_context(unsigned int c
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
 	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 	for_each_possible_cpu(i) {
-		if (i == cpu) {
-			asid = 0;
-		} else {
-			asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
-			/*
-			 * If this CPU has already been through a
-			 * rollover, but hasn't run another task in
-			 * the meantime, we must preserve its reserved
-			 * ASID, as this is the only trace we have of
-			 * the process it is still running.
-			 */
-			if (asid == 0)
-				asid = per_cpu(reserved_asids, i);
-			__set_bit(asid & ~ASID_MASK, asid_map);
-		}
+		asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
+		/*
+		 * If this CPU has already been through a
+		 * rollover, but hasn't run another task in
+		 * the meantime, we must preserve its reserved
+		 * ASID, as this is the only trace we have of
+		 * the process it is still running.
+		 */
+		if (asid == 0)
+			asid = per_cpu(reserved_asids, i);
+		__set_bit(asid & ~ASID_MASK, asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}


Patches currently in stable-queue which might be from will.deacon@xxxxxxx are

queue-3.14/arm-8299-1-mm-ensure-local-active-asid-is-marked-as-allocated-on-rollover.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
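
P.S. For anyone reviewing the logic outside the kernel tree, here is a
minimal userspace sketch (plain C, not kernel code) of the post-fix
rollover path. The per-cpu primitives are replaced by plain arrays, and
NUM_CPUS, asid_of-style constants and the main() harness are made up
for illustration; only the names active_asids, reserved_asids,
asid_map, ASID_MASK and NUM_USER_ASIDS mirror arch/arm/mm/context.c.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ASID_BITS       8
#define ASID_MASK       (~0ULL << ASID_BITS)    /* high bits = generation */
#define NUM_USER_ASIDS  (1ULL << ASID_BITS)
#define NUM_CPUS        4                       /* hypothetical CPU count */

static uint64_t active_asids[NUM_CPUS];    /* ASID currently live on a CPU */
static uint64_t reserved_asids[NUM_CPUS];  /* ASID preserved across rollover */
static unsigned char asid_map[NUM_USER_ASIDS]; /* 1 = taken this generation */

/* Model of flush_context() after this patch: every CPU, including the
 * local one, has its active ASID recorded in the new generation's map,
 * so the same slot cannot be handed to the next mm while speculative
 * TLB fills may still walk the previous mm's page table. */
static void flush_context(void)
{
	memset(asid_map, 0, sizeof(asid_map));
	for (int i = 0; i < NUM_CPUS; i++) {
		uint64_t asid = active_asids[i];
		active_asids[i] = 0;    /* models atomic64_xchg(..., 0) */
		/*
		 * A CPU that already rolled over but never switched to a
		 * new task still runs with its reserved ASID; keep it.
		 */
		if (asid == 0)
			asid = reserved_asids[i];
		asid_map[asid & ~ASID_MASK] = 1;
		reserved_asids[i] = asid;
	}
}

int main(void)
{
	/* CPU 0 is "local" and running with ASID 5 of some generation. */
	active_asids[0] = (1ULL << ASID_BITS) | 5;
	flush_context();
	/* The pre-fix code skipped the local CPU, leaving slot 5 free to
	 * be reallocated to the next mm -- the bug. Post-fix it is taken. */
	printf("ASID 5 allocated after rollover: %s\n",
	       asid_map[5] ? "yes" : "no");
	return 0;
}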