This is a note to let you know that I've just added the patch titled

    sparc64: add per-cpu mm of secondary contexts

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sparc64-add-per-cpu-mm-of-secondary-contexts.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


From foo@baz Thu Jun  8 09:20:28 CEST 2017
From: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Date: Wed, 31 May 2017 11:25:23 -0400
Subject: sparc64: add per-cpu mm of secondary contexts

From: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>

[ Upstream commit 7a5b4bbf49fe86ce77488a70c5dccfe2d50d7a2d ]

The new wrap is going to use information from this array to figure
out mm's that currently have valid secondary contexts set up.

Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Reviewed-by: Bob Picco <bob.picco@xxxxxxxxxx>
Reviewed-by: Steven Sistare <steven.sistare@xxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/sparc/include/asm/mmu_context_64.h |    5 +++--
 arch/sparc/mm/init_64.c                 |    1 +
 2 files changed, 4 insertions(+), 2 deletions(-)

--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -17,6 +17,7 @@ extern spinlock_t ctx_alloc_lock;
 extern unsigned long tlb_context_cache;
 extern unsigned long mmu_context_bmap[];
 
+DECLARE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm);
 void get_new_mmu_context(struct mm_struct *mm);
 #ifdef CONFIG_SMP
 void smp_new_mmu_context_version(void);
@@ -74,8 +75,9 @@ void __flush_tlb_mm(unsigned long, unsig
 static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk)
 {
 	unsigned long ctx_valid, flags;
-	int cpu;
+	int cpu = smp_processor_id();
 
+	per_cpu(per_cpu_secondary_mm, cpu) = mm;
 	if (unlikely(mm == &init_mm))
 		return;
 
@@ -121,7 +123,6 @@ static inline void switch_mm(struct mm_s
 	 * for the first time, we must flush that context out of the
 	 * local TLB.
 	 */
-	cpu = smp_processor_id();
 	if (!ctx_valid || !cpumask_test_cpu(cpu, mm_cpumask(mm))) {
 		cpumask_set_cpu(cpu, mm_cpumask(mm));
 		__flush_tlb_mm(CTX_HWBITS(mm->context),
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -660,6 +660,7 @@ unsigned long tlb_context_cache = CTX_FI
 #define MAX_CTX_NR	(1UL << CTX_NR_BITS)
 #define CTX_BMAP_SLOTS	BITS_TO_LONGS(MAX_CTX_NR)
 DECLARE_BITMAP(mmu_context_bmap, MAX_CTX_NR);
+DEFINE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm) = {0};
 
 /* Caller does TLB context flushing on local CPU if necessary.
  * The caller also ensures that CTX_VALID(mm->context) is false.


Patches currently in stable-queue which might be from pasha.tatashin@xxxxxxxxxx are

queue-4.4/sparc64-new-context-wrap.patch
queue-4.4/sparc64-combine-activate_mm-and-switch_mm.patch
queue-4.4/sparc64-add-per-cpu-mm-of-secondary-contexts.patch
queue-4.4/sparc64-reset-mm-cpumask-after-wrap.patch
queue-4.4/sparc64-redefine-first-version.patch
queue-4.4/sparc64-delete-old-wrap-code.patch
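
For context, the consumer of this array is the new context-wrap path
added by queue-4.4/sparc64-new-context-wrap.patch in the same series.
Below is a minimal, simplified sketch of that idea; the function name
example_wrap_rescan() and its exact shape are illustrative, not code
from this queue, while per_cpu_secondary_mm, CTX_VERSION_MASK,
CTX_NR_MASK, mmu_context_bmap and mm->context.sparc64_ctx_val are the
existing sparc64 symbols:

	/* Sketch: on a context-version wrap, walk per_cpu_secondary_mm
	 * so that only mm's currently live in some CPU's secondary
	 * context keep a (re-stamped) context number; every other mm
	 * will take the get_new_mmu_context() path on its next
	 * switch_mm().
	 */
	static void example_wrap_rescan(unsigned long new_ver)
	{
		struct mm_struct *mm;
		int cpu;

		for_each_online_cpu(cpu) {
			mm = per_cpu(per_cpu_secondary_mm, cpu);
			if (!mm || mm == &init_mm)
				continue;	/* no user context loaded here */

			/* Keep the context number, move it to the new
			 * version, and mark it allocated in the bitmap
			 * so it cannot be handed out again.
			 */
			mm->context.sparc64_ctx_val =
				(mm->context.sparc64_ctx_val &
				 ~CTX_VERSION_MASK) | new_ver;
			set_bit(mm->context.sparc64_ctx_val & CTX_NR_MASK,
				mmu_context_bmap);
		}
	}

This is also why switch_mm() now records mm unconditionally, before the
init_mm early return: the wrap code must be able to see every mm that
may be loaded in a CPU's secondary context register, and a stale or
missing entry could let a live context number be allocated twice after
the version bump.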