Re: Race condition observed between page migration and page fault handling on arm64 machines

On Thu, Aug 01, 2024 at 03:26:57PM +0200, David Hildenbrand wrote:
> On 01.08.24 15:13, David Hildenbrand wrote:
> > > > > To dampen the tradeoff, we could do this in shmem_fault() instead? But
> > > > > then, this would mean that we do this in all kinds of
> > > > > vma->vm_ops->fault, only when we discover another reference count race
> > > > > condition :) Doing this in do_fault() should solve this once and for
> > > > > all. In fact, do_pte_missing() may call do_anonymous_page() or
> > > > > do_fault(), and I just noticed that the former already checks this
> > > > > using vmf_pte_changed().
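[For illustration, the re-check being suggested above could look roughly like
the following sketch. This is hypothetical and not a patch from this thread;
the helper name and the VM_FAULT_NOPAGE return value are illustrative only.]

/*
 * Hypothetical sketch: have do_fault() re-validate the PTE the way
 * do_anonymous_page() already does via vmf_pte_changed(), so a fault that
 * raced with migration (which has meanwhile installed a migration entry)
 * backs off instead of taking an extra folio reference.
 */
static vm_fault_t do_fault_recheck_pte(struct vm_fault *vmf)
{
	vm_fault_t ret = 0;

	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
				       vmf->address, &vmf->ptl);
	if (!vmf->pte)
		return 0;

	/* PTE is no longer what do_pte_missing() saw: let the fault be retried. */
	if (vmf_pte_changed(vmf))
		ret = VM_FAULT_NOPAGE;

	pte_unmap_unlock(vmf->pte, vmf->ptl);
	return ret;
}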
> > > > 
> > > > What I am still missing is why this is (a) arm64 only; and (b) whether
> > > > this is something we should really worry about. There are other
> > > > reasons (e.g., speculative references) why migration could temporarily
> > > > fail; does it happen so often that it is really something we have to
> > > > worry about?
> > > 
> > > 
> > > (a) See discussion at [1]; I guess it passes on x86, which is quite
> > > strange since the race is clearly arch-independent.
> > 
> > Yes, I think this is what we have to understand. Is the race simply less
> > likely to trigger on x86?
> > 
> > I would assume that it would trigger on any arch.
> > 
> > I just ran it on a x86 VM with 2 NUMA nodes and it also seems to work here.
> > 
> > Is this maybe related to deferred flushing? Such that the other CPU is
> > just a little less likely to observe the !pte_none PTE by accident?
> > 
> > But arm64 also usually defers flushes, right? At least unless
> > ARM64_WORKAROUND_REPEAT_TLBI is around. With that we never do deferred
> > flushes.
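[For reference, the arm64 deferral check referred to above looks roughly like
the sketch below; this is paraphrased from arch/arm64/include/asm/tlbflush.h
and the exact capability-check helper varies between kernel versions.]

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/*
	 * With the ARM64_WORKAROUND_REPEAT_TLBI erratum workaround in
	 * effect, batched/deferred TLB flushes are never used.
	 */
	if (unlikely(cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
		return false;

	return true;
}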
> 
> Bingo!
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index e51ed44f8b53..ce94b810586b 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -718,10 +718,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
>   */
>  static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>  {
> -       if (!(flags & TTU_BATCH_FLUSH))
> -               return false;
> -
> -       return arch_tlbbatch_should_defer(mm);
> +       return false;
>  }
> 
> 
> On x86:
> 
> # ./migration
> TAP version 13
> 1..1
> # Starting 1 tests from 1 test cases.
> #  RUN           migration.shared_anon ...
> Didn't migrate 1 pages
> # migration.c:170:shared_anon:Expected migrate(ptr, self->n1, self->n2) (-2) == 0 (0)
> # shared_anon: Test terminated by assertion
> #          FAIL  migration.shared_anon
> not ok 1 migration.shared_anon
> 
> 
> It fails all of the time!

Nice work! I suppose that makes sense as, with the eager TLB
invalidation, the window between the other CPU faulting and the
migration entry being written is fairly wide.
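[To make the failure mode concrete, here is a minimal userspace sketch in the
spirit of the failing shared_anon selftest. This is not the kselftest source;
the node numbers, iteration count and error handling are assumptions, and
MPOL_MF_MOVE_ALL needs CAP_SYS_NICE.]

/* Build: gcc -O2 repro.c -lnuma  (illustrative sketch only) */
#include <numaif.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t sz = getpagesize();
	char *ptr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (ptr == MAP_FAILED)
		return 1;
	memset(ptr, 0xab, sz);		/* fault the page in before forking */

	pid_t pid = fork();
	if (pid == 0) {
		/* Child: keep touching the shared page, racing with migration. */
		for (;;)
			*(volatile char *)ptr;
	}

	/* Parent: bounce the page between NUMA nodes 0 and 1 (assumed to exist). */
	for (int i = 0, node = 0; i < 10000; i++, node ^= 1) {
		void *pages[1]  = { ptr };
		int   nodes[1]  = { node };
		int   status[1] = { 0 };

		/* MPOL_MF_MOVE_ALL: the page is also mapped by the child. */
		if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE_ALL) < 0)
			perror("move_pages");
		else if (status[0] < 0)
			fprintf(stderr, "iteration %d: not migrated (%d)\n",
				i, status[0]);
	}

	kill(pid, SIGKILL);
	wait(NULL);
	return 0;
}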

Not sure about a fix though :/ It feels a bit overkill to add a new
invalid pte encoding just for this.

Will



