Hi,

I know Will is on the case but just expressing some thoughts of my own.

On Mon, Jun 17, 2019 at 11:32:54PM +0900, Takao Indoh wrote:
> From: Takao Indoh <indou.takao@xxxxxxxxxxx>
>
> mm_cpumask was deleted by the commit 38d96287504a ("arm64: mm: kill
> mm_cpumask usage") because it was not used at that time. Now this is needed
> to find appropriate CPUs for TLB flush, so this patch reverts this commit.
>
> Signed-off-by: QI Fuli <qi.fuli@xxxxxxxxxxx>
> Signed-off-by: Takao Indoh <indou.takao@xxxxxxxxxxx>
> ---
>  arch/arm64/include/asm/mmu_context.h | 7 ++++++-
>  arch/arm64/kernel/smp.c              | 6 ++++++
>  arch/arm64/mm/context.c              | 2 ++
>  3 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 2da3e478fd8f..21ef11590bcb 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -241,8 +241,13 @@ static inline void
>  switch_mm(struct mm_struct *prev, struct mm_struct *next,
>  	  struct task_struct *tsk)
>  {
> -	if (prev != next)
> +	unsigned int cpu = smp_processor_id();
> +
> +	if (prev != next) {
>  		__switch_mm(next);
> +		cpumask_clear_cpu(cpu, mm_cpumask(prev));
> +		local_flush_tlb_mm(prev);
> +	}

That's not actually a revert as we've never flushed the TLBs on the
switch_mm() path.

Also, this flush is not sufficient on a CnP-capable CPU since another
thread of the same CPU could have the prev TTBR0_EL1 value set and be
loading TLB entries back in.

-- 
Catalin
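
(For context, a rough sketch of the kind of mm_cpumask()-targeted flush the
series appears to be aiming at: IPI only the CPUs recorded in mm_cpumask(mm)
and let each of them do a local, non-broadcast flush. The helper names
flush_tlb_mm_ipi() and ipi_flush_tlb_mm() are made up for illustration and
are not part of this patch; local_flush_tlb_mm() is the helper used in the
quoted hunk, and on_each_cpu_mask() is the generic kernel API.)

#include <linux/mm_types.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>	/* local_flush_tlb_mm(), per the quoted hunk */

/* Runs on each targeted CPU: flush only the local TLB for the given mm. */
static void ipi_flush_tlb_mm(void *arg)
{
	struct mm_struct *mm = arg;

	local_flush_tlb_mm(mm);
}

/* Flush @mm's TLB entries only on the CPUs that have run @mm. */
static void flush_tlb_mm_ipi(struct mm_struct *mm)
{
	on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, true);
}

Even with such targeted IPIs, the CnP concern above still applies: a sibling
hardware thread sharing the TLB can repopulate entries for prev while it
still has the prev TTBR0_EL1 value installed.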