Re: [RFC PATCH RESEND 15/28] mm/mmap: mark adjacent VMAs as locked if they can grow into unmapped area

On Fri, Sep 9, 2022 at 6:43 AM Laurent Dufour <ldufour@xxxxxxxxxxxxx> wrote:
>
> Le 01/09/2022 à 19:35, Suren Baghdasaryan a écrit :
> > While unmapping VMAs, adjacent VMAs might be able to grow into the area
> > being unmapped. In such cases mark adjacent VMAs as locked to prevent
> > this growth.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > ---
> >  mm/mmap.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index b0d78bdc0de0..b31cc97c2803 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2680,10 +2680,14 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
> >        * VM_GROWSUP VMA. Such VMAs can change their size under
> >        * down_read(mmap_lock) and collide with the VMA we are about to unmap.
> >        */
> > -     if (vma && (vma->vm_flags & VM_GROWSDOWN))
> > +     if (vma && (vma->vm_flags & VM_GROWSDOWN)) {
> > +             vma_mark_locked(vma);
> >               return false;
> > -     if (prev && (prev->vm_flags & VM_GROWSUP))
> > +     }
> > +     if (prev && (prev->vm_flags & VM_GROWSUP)) {
> > +             vma_mark_locked(prev);
> >               return false;
> > +     }
> >       return true;
> >  }
> >
>
> That looks right to me.
>
> But, in addition to that, and like the previous patch, all the VMAs to be
> detached from the tree in the loop above should be marked locked just
> before calling vma_rb_erase().

The following call chain already locks the VMA being isolated:
vma_rb_erase->vma_rb_erase_ignore->__vma_rb_erase->vma_mark_locked
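
For reference, a rough sketch of where that happens. This is simplified from
the rbtree-based mm/mmap.c this series is built on, with the gap/augmentation
and validation details elided; the vma_mark_locked() call is the one added by
the earlier patch in the series:

static __always_inline void __vma_rb_erase(struct vm_area_struct *vma,
					   struct rb_root *root)
{
	/*
	 * Write-lock the VMA before it disappears from the tree so that
	 * page-fault handlers relying on per-VMA locks back off.
	 */
	vma_mark_locked(vma);
	rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
}

static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
						struct rb_root *root,
						struct vm_area_struct *ignore)
{
	/* rbtree validation elided in this sketch */
	__vma_rb_erase(vma, root);
}

static __always_inline void vma_rb_erase(struct vm_area_struct *vma,
					 struct rb_root *root)
{
	vma_rb_erase_ignore(vma, root, vma);
}

So every VMA detached in the loop is already marked locked at the point it is
erased from the tree; this patch only adds the extra marking for the adjacent
VMAs that could still grow into the unmapped range.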
