Re: [PATCH v3 1/2] fork: lock VMAs of the parent process when forking

* Suren Baghdasaryan <surenb@xxxxxxxxxx> [230705 13:24]:
> On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
> >
> > On 05.07.23 19:12, Suren Baghdasaryan wrote:
> > > When forking a child process, the parent write-protects anonymous pages
> > > and COW-shares them with the child being forked using copy_present_pte().
> > > The parent's TLB is flushed right before we drop the parent's mmap_lock in
> > > dup_mmap(). If we get a write fault in the parent before that TLB flush,
> > > and we end up replacing the anonymous page in the parent process in
> > > do_wp_page() (because it is COW-shared with the child), this might leave
> > > stale writable TLB entries targeting the wrong (old) page.
> > > A similar issue happened in the past with userfaultfd (see the
> > > flush_tlb_page() call inside do_wp_page()).
> > > Lock the VMAs of the parent process when forking a child, which prevents
> > > concurrent page faults during the fork operation and avoids this issue.
> > > This fix can potentially regress some fork-heavy workloads. Kernel build
> > > time did not show a noticeable regression on a 56-core machine, while a
> > > stress test mapping 10000 VMAs and forking 5000 times in a tight loop
> > > shows a ~5% regression. If such a fork-time regression is unacceptable,
> > > disabling CONFIG_PER_VMA_LOCK should restore its performance. Further
> > > optimizations are possible if this regression proves to be problematic.
> > >
> > > Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
> > > Reported-by: Jiri Slaby <jirislaby@xxxxxxxxxx>
> > > Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@xxxxxxxxxx/
> > > Reported-by: Holger Hoffstätte <holger@xxxxxxxxxxxxxxxxxxxxxx>
> > > Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@xxxxxxxxxxxxxxxxxxxxxx/
> > > Reported-by: Jacob Young <jacobly.alt@xxxxxxxxx>
> > > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
> > > Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
> > > Cc: stable@xxxxxxxxxxxxxxx
> > > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > > ---
> > >   kernel/fork.c | 6 ++++++
> > >   1 file changed, 6 insertions(+)
> > >
> > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > index b85814e614a5..403bc2b72301 100644
> > > --- a/kernel/fork.c
> > > +++ b/kernel/fork.c
> > > @@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
> > >               retval = -EINTR;
> > >               goto fail_uprobe_end;
> > >       }
> > > +#ifdef CONFIG_PER_VMA_LOCK
> > > +     /* Disallow any page faults before calling flush_cache_dup_mm */
> > > +     for_each_vma(old_vmi, mpnt)
> > > +             vma_start_write(mpnt);
> > > +     vma_iter_init(&old_vmi, oldmm, 0);

vma_iter_set(&old_vmi, 0) is probably what you want here.
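i.e. keep the iterator that dup_mmap() already set up and just reset its
position after the write-locking loop, rather than reinitializing it
(sketch against this hunk, untested):

	#ifdef CONFIG_PER_VMA_LOCK
		/* Disallow any page faults before calling flush_cache_dup_mm */
		for_each_vma(old_vmi, mpnt)
			vma_start_write(mpnt);
		vma_iter_set(&old_vmi, 0);
	#endif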

> > > +#endif
> > >       flush_cache_dup_mm(oldmm);
> > >       uprobe_dup_mmap(oldmm, mm);
> > >       /*
> >
> > The old version was most probably fine as well, but this certainly looks
> > even safer.
> >
> > Acked-by: David Hildenbrand <david@xxxxxxxxxx>

I think this is overkill; calling vma_start_write() should already
synchronize with any readers, since it takes the per-VMA rw semaphore
in write mode. Anything faulting will need to finish before the fork
continues, and faults during the fork will fall back to a read lock of
the mmap_lock.  Is there a possibility of a populate happening outside
the mmap_lock write lock / vma_lock?

Was your benchmarking done with this loop at the start?

Thanks,
Liam




