Re: [PATCH v4 18/33] mm: write-lock VMAs before removing them from VMA tree

On Wed, Mar 1, 2023 at 10:34 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Tue, Feb 28, 2023 at 11:57 PM Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx> wrote:
> >
> > On Wed, Mar 01, 2023 at 07:43:33AM +0000, Hyeonggon Yoo wrote:
> > > On Mon, Feb 27, 2023 at 09:36:17AM -0800, Suren Baghdasaryan wrote:
> > > > Write-locking VMAs before isolating them ensures that page fault
> > > > handlers don't operate on isolated VMAs.
> > > >
> > > > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > > > ---
> > > >  mm/mmap.c  | 1 +
> > > >  mm/nommu.c | 5 +++++
> > > >  2 files changed, 6 insertions(+)
> > > >
> > > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > > index 1f42b9a52b9b..f7ed357056c4 100644
> > > > --- a/mm/mmap.c
> > > > +++ b/mm/mmap.c
> > > > @@ -2255,6 +2255,7 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > > >  static inline int munmap_sidetree(struct vm_area_struct *vma,
> > > >                                struct ma_state *mas_detach)
> > > >  {
> > > > +   vma_start_write(vma);
> > > >     mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1);
> > >
> > > I may be missing something, but have few questions:
> > >
> > >       1) Why does a writer need to both write-lock a VMA and mark the VMA detached
> > >          when unmapping it? Isn't it enough to just write-lock the VMA?
>
> We need to mark the VMA detached to avoid handling a page fault in a
> detached VMA. The possible scenario is:
>
> lock_vma_under_rcu
>   vma = mas_walk(&mas)
>                                         munmap_sidetree
>                                           vma_start_write(vma)
>                                           mas_store_gfp() // remove VMA from the tree
>                                           vma_end_write_all()
>   vma_start_read(vma)
>   // we locked the VMA but it is not part of the tree anymore.
>
> So, marking the VMA detached before vma_end_write_all() and checking
> vma->detached after vma_start_read() helps us avoid handling faults in
> a detached VMA.
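>
> A rough sketch of the reader side, to illustrate why both pieces (the
> write lock and the detached flag) are needed. This is simplified from
> lock_vma_under_rcu() in this series; the real code differs in details
> (it also re-checks the VMA boundaries, only handles anonymous VMAs for
> now, etc.):
>
> struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
>                                           unsigned long address)
> {
>         MA_STATE(mas, &mm->mm_mt, address, address);
>         struct vm_area_struct *vma;
>
>         rcu_read_lock();
>         vma = mas_walk(&mas);
>         if (!vma)
>                 goto inval;
>
>         /* Fails while the VMA is write-locked, i.e. until the writer
>          * drops mmap_lock and vma_end_write_all() runs. */
>         if (!vma_start_read(vma))
>                 goto inval;
>
>         /* The VMA may have been isolated from the tree between
>          * mas_walk() and vma_start_read(), so re-check the detached
>          * flag and fall back to the mmap_lock path if it is set. */
>         if (vma->detached) {
>                 vma_end_read(vma);
>                 goto inval;
>         }
>
>         rcu_read_unlock();
>         return vma;
> inval:
>         rcu_read_unlock();
>         return NULL;
> }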
>
>
> > >
> > >       2) As VMAs that are going to be removed are already locked in vma_prepare(),
> > >          I think this hunk could be dropped?
> >
> > After sending this I just realized that I did not consider the simple munmap case :)
> > But I still think 1) and 3) are valid questions.
> >
> > >
> > > >     if (mas_store_gfp(mas_detach, vma, GFP_KERNEL))
> > > >             return -ENOMEM;
> > > > diff --git a/mm/nommu.c b/mm/nommu.c
> > > > index 57ba243c6a37..2ab162d773e2 100644
> > > > --- a/mm/nommu.c
> > > > +++ b/mm/nommu.c
> > > > @@ -588,6 +588,7 @@ static int delete_vma_from_mm(struct vm_area_struct *vma)
> > > >                    current->pid);
> > > >             return -ENOMEM;
> > > >     }
> > > > +   vma_start_write(vma);
> > > >     cleanup_vma_from_mm(vma);
> > >
> > >       3) I think this hunk could be dropped as the per-VMA lock depends on MMU anyway.
>
> Ah, yes, you are right. We can safely remove the changes in nommu.c.
> Andrew, should I post a fixup, or can you make the removal directly in
> mm-unstable?

I went ahead and posted the fixup for this at:
https://lore.kernel.org/all/20230301190457.1498985-1-surenb@xxxxxxxxxx/
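For context on 3): CONFIG_PER_VMA_LOCK is gated on MMU, so mm/nommu.c is
never built with per-VMA locks enabled and the vma_start_write() call there
would only ever hit the no-op stub. The Kconfig entry looks roughly like
this (from memory, the exact dependency list may differ):

config PER_VMA_LOCK
        def_bool y
        depends on MMU && SMP && !PREEMPT_RT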

> Thanks,
> Suren.
>
> > >
> > > Thanks,
> > > Hyeonggon
> > >
> > > >
> > > >     /* remove from the MM's tree and list */
> > > > @@ -1519,6 +1520,10 @@ void exit_mmap(struct mm_struct *mm)
> > > >      */
> > > >     mmap_write_lock(mm);
> > > >     for_each_vma(vmi, vma) {
> > > > +           /*
> > > > +            * No need to lock VMA because this is the only mm user and no
> > > > +            * page fault handler can race with it.
> > > > +            */
> > > >             cleanup_vma_from_mm(vma);
> > > >             delete_vma(mm, vma);
> > > >             cond_resched();
> > > > --
> > > > 2.39.2.722.g9855ee24e9-goog
> > > >
> > > >
> > >
> >




