On Thu, Jul 04, 2024 at 02:27:04PM GMT, Liam R. Howlett wrote:
> Extract clean up of failed munmap() operations from
> do_vmi_align_munmap(). This simplifies later patches in the series.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> ---
>  mm/mmap.c | 25 ++++++++++++++++++++-----
>  1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 28a46d9ddde0..d572e1ff8255 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2586,6 +2586,25 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
>  			   vma->vm_userfaultfd_ctx, anon_vma_name(vma));
>  }
>
> +/*
> + * abort_munmap_vmas - Undo any munmap work and free resources
> + *
> + * Reattach detached vmas, free up maple tree used to track the vmas.
> + */
> +static inline void abort_munmap_vmas(struct ma_state *mas_detach)
> +{
> +	struct vm_area_struct *vma;
> +	int limit;
> +
> +	limit = mas_detach->index;

This is actually a change to the existing behaviour, albeit a sensible
one - rather than walking the tree start-to-end, you now only walk up to
the point it has been populated (assuming I'm not missing anything,
mas_for_each() looks to be _inclusive_ on max).

Maybe worth mentioning in the commit message? I've sketched out what I
mean at the bottom of this mail.

> +	mas_set(mas_detach, 0);
> +	/* Re-attach any detached VMAs */
> +	mas_for_each(mas_detach, vma, limit)
> +		vma_mark_detached(vma, false);
> +
> +	__mt_destroy(mas_detach->tree);
> +}
> +
>  /*
>   * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
>   * @vmi: The vma iterator
> @@ -2740,11 +2759,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  userfaultfd_error:
>  munmap_gather_failed:
>  end_split_failed:
> -	mas_set(&mas_detach, 0);
> -	mas_for_each(&mas_detach, next, end)
> -		vma_mark_detached(next, false);
> -
> -	__mt_destroy(&mt_detach);
> +	abort_munmap_vmas(&mas_detach);
>  start_split_failed:
>  map_count_exceeded:
>  	validate_mm(mm);
> --
> 2.43.0
>

This looks fine though, so feel free to add:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
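
P.S. to spell out the behavioural change I mention above, here's a
minimal sketch (illustration only, assuming the mm/mmap.c context of the
patch - the reattach_old()/reattach_new() names are mine, not anything
in the series):

/* Old cleanup path: the reattach walk is bounded by 'end' from the caller. */
static inline void reattach_old(struct ma_state *mas_detach,
				unsigned long end)
{
	struct vm_area_struct *next;

	mas_set(mas_detach, 0);
	/* Effectively walks until the tree runs out of entries. */
	mas_for_each(mas_detach, next, end)
		vma_mark_detached(next, false);
}

/* New cleanup path: the walk is bounded by the index the iterator reached. */
static inline void reattach_new(struct ma_state *mas_detach)
{
	struct vm_area_struct *vma;
	int limit;

	/* Bound the walk by the index the detach iterator currently sits at. */
	limit = mas_detach->index;
	mas_set(mas_detach, 0);
	/* mas_for_each() is inclusive of 'limit', so the last slot is seen. */
	mas_for_each(mas_detach, vma, limit)
		vma_mark_detached(vma, false);
}

Either way the VMAs end up re-attached; the difference is only in how
the walk is bounded, hence a commit message note rather than a
functional concern.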