Re: [PATCH v2 1/2] fork: lock VMAs of the parent process when forking

On Wed, Jul 5, 2023 at 1:08 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 05.07.23 08:37, Suren Baghdasaryan wrote:
> > When forking a child process, the parent write-protects an anonymous
> > page and COW-shares it with the child being forked using
> > copy_present_pte(). The parent's TLB is flushed right before we drop
> > the parent's mmap_lock in dup_mmap(). If we get a write fault before
> > that TLB flush in the parent, and we end up replacing that anonymous
> > page in the parent process in do_wp_page() (because it is COW-shared
> > with the child), this might lead to stale writable TLB entries
> > targeting the wrong (old) page. A similar issue happened in the past
> > with userfaultfd (see the flush_tlb_page() call inside do_wp_page()).
> > Lock the VMAs of the parent process when forking a child, which
> > prevents concurrent page faults during the fork operation and avoids
> > this issue.
> > This fix can potentially regress some fork-heavy workloads. Kernel
> > build time did not show a noticeable regression on a 56-core machine,
> > while a stress test mapping 10000 VMAs and forking 5000 times in a
> > tight loop shows a ~5% regression. If such a fork-time regression is
> > unacceptable, disabling CONFIG_PER_VMA_LOCK should restore the
> > performance. Further optimizations are possible if this regression
> > proves to be problematic.
>
> Out of interest, did you also populate page tables / pages for some of these
> VMAs, or is this primarily looping over 10000 VMAs that don't actually copy any
> page tables?

I did not populate the page tables, so this represents the worst-case
scenario (the share of time spent locking the VMAs is maximized).
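
For reference, the stress test was roughly of the following shape (a
hypothetical sketch, not the exact program I ran; the
alternating-protection trick to keep adjacent anonymous mappings from
merging into a single VMA is an illustrative assumption):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NR_VMAS		10000
#define NR_FORKS	5000

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int i;

	for (i = 0; i < NR_VMAS; i++) {
		/* Alternate protections so adjacent anonymous mappings
		 * cannot merge into a single VMA. The pages are never
		 * touched, so no page tables are populated and
		 * dup_mmap() time is dominated by per-VMA work. */
		int prot = (i & 1) ? PROT_READ : PROT_READ | PROT_WRITE;

		if (mmap(NULL, page, prot, MAP_PRIVATE | MAP_ANONYMOUS,
			 -1, 0) == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
	}

	/* The tight fork loop is the part that gets timed. */
	for (i = 0; i < NR_FORKS; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0)
			_exit(0);	/* child exits immediately */
		waitpid(pid, NULL, 0);
	}
	return 0;
}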

>
> >
> > Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
> > Reported-by: Jiri Slaby <jirislaby@xxxxxxxxxx>
> > Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@xxxxxxxxxx/
> > Reported-by: Holger Hoffstätte <holger@xxxxxxxxxxxxxxxxxxxxxx>
> > Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@xxxxxxxxxxxxxxxxxxxxxx/
> > Reported-by: Jacob Young <jacobly.alt@xxxxxxxxx>
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
> > Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
> > Cc: stable@xxxxxxxxxxxxxxx
> > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > ---
> >   kernel/fork.c | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index b85814e614a5..d2e12b6d2b18 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -686,6 +686,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
> >       for_each_vma(old_vmi, mpnt) {
> >               struct file *file;
> >
> > +             vma_start_write(mpnt);
> >               if (mpnt->vm_flags & VM_DONTCOPY) {
> >                       vm_stat_account(mm, mpnt->vm_flags, -vma_pages(mpnt));
> >                       continue;
>
> After the mmap_write_lock_killable(), there will still be a period where page
> faults can happen: essentially, page faults remain possible for a VMA until we lock that VMA.
>
> I cannot immediately name something that breaks if we allow that, and this change
> should fix the issue at hand, but exotic things like
>
>         flush_cache_dup_mm(oldmm);
>
> make me wonder if we really want to allow for that, or if there is some other corner
> case in fork() handling that really doesn't expect concurrent page faults (and, thereby,
> page table modifications) during fork.
>
> For example, documentation/core-api/cachetlb.rst says
>
> 2) ``void flush_cache_dup_mm(struct mm_struct *mm)``
>
>         This interface flushes an entire user address space from
>         the caches.  That is, after running, there will be no cache
>         lines associated with 'mm'.
>
>         This interface is used to handle whole address space
>         page table operations such as what happens during fork.
>
>         This option is separate from flush_cache_mm to allow some
>         optimizations for VIPT caches.
>

I see. So, we really need to lock all VMAs before
flush_cache_dup_mm(). Makes sense. I'll post an update to this patch
shortly.
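
To spell out the window being closed (a rough interleaving sketch, not
taken from an actual trace):

  fork() side                            concurrent fault side
  -----------                            ---------------------
  mmap_write_lock_killable(oldmm)
  flush_cache_dup_mm(oldmm)
                                         per-VMA-lock fault on VMA A
                                         (A is not write-locked yet, so
                                         the fault proceeds and can
                                         modify A's page tables)
  for_each_vma():
    vma_start_write(A)
    copy A's page tables                 <- the earlier cache flush did
                                            not cover the fault's changes

Locking every VMA before flush_cache_dup_mm() closes that window.
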
Thanks,
Suren.

>
> An alternative that requires another VMA walk would be
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 41c964104b58..0f182d3f049b 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -662,6 +662,13 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>                 retval = -EINTR;
>                 goto fail_uprobe_end;
>         }
> +
> +       /* Disallow any page faults early by locking all VMAs. */
> +       if (IS_ENABLED(CONFIG_PER_VMA_LOCK)) {
> +               for_each_vma(old_vmi, mpnt)
> +                       vma_start_write(mpnt);
> +               vma_iter_init(old_vmi, old_mm, 0);
> +       }
>         flush_cache_dup_mm(oldmm);
>         uprobe_dup_mmap(oldmm, mm);
>         /*
> --
> 2.41.0
>
>
> Unless there are other thoughts, I guess your change is fine regarding the problem
> at hand. I'm not so sure about other corner cases; that's why I'm spelling it out.
>
>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
>
> --
> Cheers,
>
> David / dhildenb
>




