Hi Russell,

On Sat, Mar 09, 2024 at 09:57:04AM +0000, Russell King (Oracle) wrote:
> On Sat, Mar 09, 2024 at 08:45:35AM +0100, Stefan Wiehler wrote:
> > diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
> > index 4204ffa2d104..4fc2c559f1b6 100644
> > --- a/arch/arm/mm/context.c
> > +++ b/arch/arm/mm/context.c
> > @@ -254,7 +254,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
> >  	    && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
> >  		goto switch_mm_fastpath;
> >  
> > -	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
> > +	local_irq_save(flags);
> > +	arch_spin_lock(&cpu_asid_lock.raw_lock);
> >  	/* Check that our ASID belongs to the current generation. */
> >  	asid = atomic64_read(&mm->context.id);
> >  	if ((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) {
> > @@ -269,7 +270,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
> >  
> >  	atomic64_set(&per_cpu(active_asids, cpu), asid);
> >  	cpumask_set_cpu(cpu, mm_cpumask(mm));
> > -	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
> > +	arch_spin_unlock(&cpu_asid_lock.raw_lock);
> > +	local_irq_restore(flags);
> >  
> >  switch_mm_fastpath:
> >  	cpu_switch_mm(mm->pgd, mm);
> > 
> > @Russell, what do you think?
> 
> I think this is Will Deacon's code, so we ought to hear from Will...

Thanks for adding me in.

Using arch_spin_lock() really feels like a bodge to me. This code isn't
run only on the hot-unplug path; it is part of switch_mm(), and we
really should be able to have lockdep work properly there for the usual
case.

Now, do we actually need to worry about the ASID when switching to
init_mm? I'd have thought that would be confined to global (kernel)
mappings, so I wonder whether we could avoid this slow-path code
altogether, as we do on arm64 in __switch_mm(). But I must confess that
I don't recall the details of the pre-LPAE MMU configuration...

Will
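
P.S. For illustration only, a rough sketch of the kind of init_mm
short-cut meant above, loosely modelled on arm64's __switch_mm(). This
is not existing arch/arm code: the function name is made up for the
example, and it assumes init_mm carries only global kernel mappings so
that no ASID needs to be allocated for it. What would have to be
programmed into TTBR0 in the pre-LPAE case is deliberately left open.

static void __switch_mm_sketch(struct mm_struct *next,
			       struct task_struct *tsk)
{
	if (next == &init_mm) {
		/*
		 * Hypothetical: init_mm has no user mappings, so there is
		 * no ASID to allocate and no need to take cpu_asid_lock.
		 * (arm64 just programs the reserved TTBR0 here.)
		 */
		return;
	}

	/* Normal case: run the ASID allocator as today. */
	check_and_switch_context(next, tsk);
}

The point of the sketch is only that the slow path (and hence the
locking question raised by the patch) would never be reached for the
idle/init address space, if the pre-LPAE MMU setup allows it.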