Re: [PATCH 09/12] x86/mm: enable broadcast TLB invalidation for multi-threaded processes

On Sat, Jan 4, 2025 at 3:55 AM Rik van Riel <riel@xxxxxxxxxxx> wrote:
> On Fri, 2025-01-03 at 18:36 +0100, Jann Horn wrote:
> > Maybe change how mm->context.asid_transition works such that it is
> > immediately set on mm creation and cleared when the transition is
> > done, so that you don't have to touch it here?
> >
> If we want to document the ordering, won't it be better
> to keep both assignments close to each other (with WRITE_ONCE),
> so the code stays easier to understand for future maintenance?

You have a point there. I was thinking that if asid_transition is set
on mm creation, we don't have to think about the ordering properties
as hard; but I guess you're right that it would be cleaner and more
future-proof to do the writes together here.
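
Something like this, maybe (just an untested sketch; mm->context.bc_asid
is a name I made up for this example, only asid_transition appears in
the patch):

        /*
         * Publish the broadcast ASID and mark the transition as in
         * progress in one place. On x86, stores are not reordered
         * against other stores, so a reader that sees asid_transition
         * set will also see the new bc_asid.
         */
        WRITE_ONCE(mm->context.bc_asid, bc_asid);
        WRITE_ONCE(mm->context.asid_transition, true);

That way the ordering requirement is documented right where both
writes happen.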

> > > +               return;
> > > +
> > > +       for_each_cpu(cpu, mm_cpumask(mm)) {
> > > +               if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
> > > +                       continue;
> >
> > switch_mm_irqs_off() picks an ASID and writes CR3 before writing
> > loaded_mm:
> > "/* Make sure we write CR3 before loaded_mm. */"
> >
> > Can we race with a concurrent switch_mm_irqs_off() on the other CPU
> > such that the other CPU has already switched CR3 to our MM using the
> > old ASID, but has not yet written loaded_mm, such that we skip it
> > here? And then we'll think we finished the ASID transition, and the
> > next time we do a flush, we'll wrongly omit the flush for that other
> > CPU even though it's still using the old ASID?
>
> That is a very good question.
>
> I suppose we need to check against LOADED_MM_SWITCHING
> too, and possibly wait to see what mm shows up on that
> CPU before proceeding?
>
> Maybe as simple as this?
>
>         for_each_cpu(cpu, mm_cpumask(mm)) {
>                 while (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) == LOADED_MM_SWITCHING)
>                         cpu_relax();
>
>                 if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
>                         continue;
>
>                 /*
>                  * If at least one CPU is not using the broadcast ASID yet,
>                  * send a TLB flush IPI. The IPI should cause stragglers
>                  * to transition soon.
>                  */
>                 if (per_cpu(cpu_tlbstate.loaded_mm_asid, cpu) != bc_asid) {
>                         flush_tlb_multi(mm_cpumask(info->mm), info);
>                         return;
>                 }
>         }
>
> Then the only change needed to switch_mm_irqs_off
> would be to move the LOADED_MM_SWITCHING line to
> before choose_new_asid, to fully close the window.
>
> Am I overlooking anything here?

I think that might require having a full memory barrier in
switch_mm_irqs_off to ensure that the write of LOADED_MM_SWITCHING
can't be reordered after reads in choose_new_asid(). Which wouldn't be
very nice; we probably should avoid adding heavy barriers to the task
switch path...

Hmm, but I think luckily cpumask_set_cpu() is already a relaxed atomic
RMW, which on X86 is actually the same as a sequentially consistent
atomic, so as long as you put the LOADED_MM_SWITCHING line before
that, it might do the job? Maybe with an smp_mb__after_atomic() and/or
an explainer comment.
(smp_mb__after_atomic() is a no-op on x86, so maybe just a comment is
the right way. Documentation/memory-barriers.txt says
smp_mb__after_atomic() can be used together with atomic RMW bitop
functions.)
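
I mean something roughly like this (an untested sketch of the relevant
part of switch_mm_irqs_off(), based on the current code):

        /* Mark this CPU as mid-switch before anything else. */
        this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);

        /*
         * cpumask_set_cpu() is a LOCK-prefixed RMW on x86, so it acts
         * as a full memory barrier: the LOADED_MM_SWITCHING store
         * above cannot be reordered past the reads in
         * choose_new_asid() below.
         */
        cpumask_set_cpu(cpu, mm_cpumask(next));
        smp_mb__after_atomic(); /* no-op on x86, documents the ordering */

        choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);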

> > > +
> > > +               /*
> > > +                * If at least one CPU is not using the broadcast ASID yet,
> > > +                * send a TLB flush IPI. The IPI should cause stragglers
> > > +                * to transition soon.
> > > +                */
> > > +               if (per_cpu(cpu_tlbstate.loaded_mm_asid, cpu) != bc_asid) {
> >
> > READ_ONCE()? Also, I think this needs a comment explaining that this
> > can race with concurrent MM switches such that we wrongly think that
> > there's a straggler (because we're not reading the loaded_mm and the
> > loaded_mm_asid as one atomic combination).
>
> I'll add the READ_ONCE.
>
> Will the race still exist if we wait on
> LOADED_MM_SWITCHING as proposed above?

I think so, since between reading the loaded_mm and reading the
loaded_mm_asid, the remote CPU might go through an entire task switch.
Like:

1. We read the loaded_mm, and see that the remote CPU is currently
running in our mm_struct.
2. The remote CPU does a task switch to another process with a
different mm_struct.
3. We read the loaded_mm_asid, and see an ASID that does not match our
broadcast ASID (because the loaded ASID is not for our mm_struct).
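
In code terms, the window is between the two reads:

        if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
                continue;

        /* ... the remote CPU can do a full task switch here ... */

        if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm_asid, cpu)) != bc_asid) {
                /*
                 * Possibly a false positive: the ASID we just read may
                 * belong to whatever mm the remote CPU switched to, not
                 * to the mm we checked above.
                 */
                flush_tlb_multi(mm_cpumask(info->mm), info);
                return;
        }

So the spurious flush should be harmless (just redundant work), but it
deserves a comment.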
