Re: [PATCH v2 6/6] fork: Use __mt_dup() to duplicate maple tree in dup_mmap()


* Peng Zhang <zhangpeng.00@xxxxxxxxxxxxx> [230915 06:57]:
> 
> 

...

> > > > +    if (unlikely(retval))
> > > >           goto out;
> > > >       mt_clear_in_rcu(vmi.mas.tree);
> > > > -    for_each_vma(old_vmi, mpnt) {
> > > > +    for_each_vma(vmi, mpnt) {
> > > >           struct file *file;
> > > >           vma_start_write(mpnt);
> > > >           if (mpnt->vm_flags & VM_DONTCOPY) {
> > > >               vm_stat_account(mm, mpnt->vm_flags, -vma_pages(mpnt));
> > > > +
> > > > +            /*
> > > > +             * Since the new tree is exactly the same as the old one,
> > > > +             * we need to remove the unneeded VMAs.
> > > > +             */
> > > > +            mas_store(&vmi.mas, NULL);
> > > > +
> > > > +            /*
> > > > +             * Even removing an entry may require memory allocation,
> > > > +             * and if removal fails, we use XA_ZERO_ENTRY to mark
> > > > +             * from which VMA it failed. The case of encountering
> > > > +             * XA_ZERO_ENTRY will be handled in exit_mmap().
> > > > +             */
> > > > +            if (unlikely(mas_is_err(&vmi.mas))) {
> > > > +                retval = xa_err(vmi.mas.node);
> > > > +                mas_reset(&vmi.mas);
> > > > +                if (mas_find(&vmi.mas, ULONG_MAX))
> > > > +                    mas_store(&vmi.mas, XA_ZERO_ENTRY);
> > > > +                goto loop_out;
> > > > +            }
> > > > +
> > > 
> > > Storing NULL may need extra space as you noted, so we need to be careful
> > > what happens if we don't have that space.  We should have a testcase to
> > > test this scenario.
> > > 
> > > mas_store_gfp() should be used with GFP_KERNEL.  The VMAs use GFP_KERNEL
> > > in this function, see vm_area_dup().
> > > 
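To be clearer, something like this untested sketch is what I meant
(the unwind at loop_out is assumed to exist):

	/*
	 * Use an explicit GFP so a failed node allocation is returned
	 * as an errno instead of leaving the mas in an error state.
	 */
	retval = mas_store_gfp(&vmi.mas, NULL, GFP_KERNEL);
	if (unlikely(retval))
		goto loop_out;
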
> > > Don't use the exit_mmap() path to undo a failed fork.  You've added
> > > checks and complications to the exit path for all tasks in the very
> > > unlikely event that we run out of memory when we hit a very unlikely
> > > VM_DONTCOPY flag.
> > > 
> > > I see the issue with having a portion of the tree with new VMAs that are
> > > accounted and a portion of the tree that has old VMAs that should not be
> > > looked at.  It was clever to use the XA_ZERO_ENTRY as a stop point, but
> > > we cannot add that complication to the exit path and then there is the
> > > OOM race to worry about (maybe, I am not sure since this MM isn't
> > > active yet).
> > I encountered some errors after implementing the scheme you mentioned
> > below.

What were the errors?  Maybe I missed something or there is another way.

> > This would also clutter fork.c and mmap.c, as some internal
> > functions would need to be made global.

Could it not be a new function in mm/mmap.c, declared in mm/internal.h,
that does the accounting and VMA freeing for the range [0, vma->vm_start)?

Maybe we could use it in the other areas that do this sort of work?
do_vmi_align_munmap() does something similar to what we need after the
"Point of no return".

> > 
> > I thought of another way to keep everything in the maple tree. In
> > non-RCU mode, we can remove the second half of the tree without
> > allocating any memory. This requires modifications to the internal
> > implementation of mas_store().
> > Then remove the second half of the tree like this:
> > 
> > mas.index = 0;
> Sorry, typo.
> Change to: mas.index = vma->vm_start
> > mas.last = ULONG_MAX;
> > mas_store(&mas, NULL).

Well, we know we are not in RCU mode here, but I am concerned about this
going poorly.

> 
> > 
> > At least in non-RCU mode, we can do this, since we only need to merge
> > some nodes or move some items to adjacent nodes.
> > However, this will increase the workload significantly.

In the unlikely event of an issue allocating memory, this would be
unwelcome.  If we can avoid it, it would be best.  I don't mind being
slow in error paths, but a significant workload would be rather bad on
an overloaded system.

> > 
> > > 
> > > Using what is done in exit_mmap() and do_vmi_align_munmap() as a
> > > prototype, we can do something like the *untested* code below:
> > > 
> > > if (unlikely(mas_is_err(&vmi.mas))) {
> > >     unsigned long max = vmi.mas.index;
> > > 
> > >     retval = xa_err(vmi.mas.node);
> > >     mas_set(&vmi.mas, 0);
> > >     tmp = mas_find(&vmi.mas, ULONG_MAX);
> > >     if (tmp) { /* Not the first VMA failed */
> > >         unsigned long nr_accounted = 0;
> > > 
> > >         unmap_region(mm, &vmi.mas, tmp, NULL, mpnt, 0, max, max,
> > >                 true);
> > >         do {
> > >             if (tmp->vm_flags & VM_ACCOUNT)
> > >                 nr_accounted += vma_pages(tmp);
> > >             remove_vma(tmp, true);
> > >             cond_resched();
> > >             tmp = mas_find(&vmi.mas, max - 1);
> > >         } while (tmp != NULL);
> > > 
> > >         vm_unacct_memory(nr_accounted);
> > >     }
> > >     __mt_destroy(&mm->mm_mt);
> > >     goto loop_out;
> > > }
> > > 
> > > Once exit_mmap() is called, the check for OOM (no vma) will catch that
> > > nothing is left to do.
> > > 
> > > It might be worth making an inline function to do this to keep the fork
> > > code clean.  We should test this by detecting a specific task name and
> > > returning a failure at a given interval:
> > > 
> > > if (!strcmp(current->comm, "fork_test") {
> > > ...
> > > }
> > > 
...
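
For the test mentioned above, a rough sketch (the helper name, task name
and failure interval are arbitrary; this is only for exercising the error
path, not for merging):

	static bool fork_test_should_fail(void)
	{
		static unsigned long count;

		if (strcmp(current->comm, "fork_test"))
			return false;

		/* Fail roughly every 10th NULL store to hit different VMAs. */
		return (++count % 10) == 0;
	}

and in the VM_DONTCOPY branch of dup_mmap():

	if (fork_test_should_fail())
		retval = -ENOMEM;
	else
		retval = mas_store_gfp(&vmi.mas, NULL, GFP_KERNEL);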


Thanks,
Liam
