Re: [PATCH v3 05/16] mm/mmap: Introduce vma_munmap_struct for use in munmap operations

* Suren Baghdasaryan <surenb@xxxxxxxxxx> [240710 12:07]:
> On Fri, Jul 5, 2024 at 12:09 PM Liam R. Howlett <Liam.Howlett@xxxxxxxxxx> wrote:
> >
> > * Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx> [240705 14:39]:
> > > On Thu, Jul 04, 2024 at 02:27:07PM GMT, Liam R. Howlett wrote:
> > > > Use a structure to pass along all the necessary information and counters
> > > > involved in removing vmas from the mm_struct.
> > > >
> > > > Update vmi_ function names to vms_ to indicate the first argument
> > > > type change.
> > > >
> > > > Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> > > > Reviewed-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > > > ---
> > > >  mm/internal.h |  16 ++++++
> > > >  mm/mmap.c     | 137 ++++++++++++++++++++++++++------------------------
> > > >  2 files changed, 88 insertions(+), 65 deletions(-)
> > > >
> > > > diff --git a/mm/internal.h b/mm/internal.h
> > > > index 2ea9a88dcb95..f1e6dea2efcf 100644
> > > > --- a/mm/internal.h
> > > > +++ b/mm/internal.h
> > > > @@ -1481,6 +1481,22 @@ struct vma_prepare {
> > > >     struct vm_area_struct *remove2;
> > > >  };
> > > >
> > > > +/*
> > > > + * vma munmap operation
> > > > + */
> > > > +struct vma_munmap_struct {
> > > > +   struct vma_iterator *vmi;
> > > > +   struct mm_struct *mm;
> > > > +   struct vm_area_struct *vma;     /* The first vma to munmap */
> > > > +   struct list_head *uf;           /* Userfaultfd list_head */
> > > > +   unsigned long start;            /* Aligned start addr */
> > > > +   unsigned long end;              /* Aligned end addr */
> > > > +   int vma_count;                  /* Number of vmas that will be removed */
> > > > +   unsigned long nr_pages;         /* Number of pages being removed */
> > > > +   unsigned long locked_vm;        /* Number of locked pages */
> > > > +   bool unlock;                    /* Unlock after the munmap */
> > > > +};
> > >
> > >
> > > I'm a big fan of breaking out and threading state like this through some of
> > > these more... verbose VMA functions.
> > >
> > > I have a similar idea as part of my long dreamed of 'delete vma_merge()'
> > > patch set. Coming soon :)
> > >
> > > > +
> > > >  void __meminit __init_single_page(struct page *page, unsigned long pfn,
> > > >                             unsigned long zone, int nid);
> > > >
> > > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > > index 8dc8ffbf9d8d..76e93146ee9d 100644
> > > > --- a/mm/mmap.c
> > > > +++ b/mm/mmap.c
> > > > @@ -500,6 +500,31 @@ static inline void init_vma_prep(struct vma_prepare *vp,
> > > >     init_multi_vma_prep(vp, vma, NULL, NULL, NULL);
> > > >  }
> > > >
> > > > +/*
> > > > + * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
> > > > + * @vms: The vma munmap struct
> > > > + * @vmi: The vma iterator
> > > > + * @vma: The first vm_area_struct to munmap
> > > > + * @start: The aligned start address to munmap
> > > > + * @end: The aligned end address to munmap
> > >
> > > Maybe worth mentioning if inclusive/exclusive.
> >
> > Isn't "address to munmap" specific enough to indicate that we are using
> > the same logic as the munmap call?  That is, for a vma the start is
> > inclusive and the end is exclusive.
> >
> > Not a big change, either way.
> 
> +1. Every time I look into these functions with start/end I have to go
> back and check these inclusive/exclusive rules, so mentioning it would
> be helpful.

I am making this clear with the following in v4:
+       unsigned long start;            /* Aligned start addr (inclusive) */
+       unsigned long end;              /* Aligned end addr (exclusive) */

Any time we deal with a vma, start/end follow this convention (inclusive
start, exclusive end); the maple tree itself is inclusive/inclusive.
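
For anyone who wants to see the convention from the userspace side, here is
a minimal sketch (not part of the patch, and the variable names are mine):
munmap()'s start address is inclusive and start + length is exclusive, which
is the same [start, end) convention the vma_munmap_struct fields follow.

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *p;

	/* Map two pages of anonymous memory. */
	p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Unmap [p, p + page): start is inclusive, end is exclusive,
	 * so the byte at p + page is still mapped afterwards. */
	if (munmap(p, page))
		return 1;

	p[page] = 'x';		/* second page is still usable */
	printf("byte at the (exclusive) end is still mapped: %c\n", p[page]);

	munmap(p + page, page);
	return 0;
}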




