The patch titled
     Subject: mm: move may_expand_vm() check in mmap_region()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-move-may_expand_vm-check-in-mmap_region.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-move-may_expand_vm-check-in-mmap_region.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>
Subject: mm: move may_expand_vm() check in mmap_region()
Date: Thu, 22 Aug 2024 15:25:41 -0400

The may_expand_vm() check requires the count of pages within the munmap
range.  Since this count is needed for accounting and is obtained later
anyway, reordering may_expand_vm() to later in the call stack, after the
vma munmap struct (vms) is initialised and the gather stage has
potentially run, allows for a single loop over the vmas.  The gather
stage does not commit any work, so everything can be undone in the case
of a failure.

The MAP_FIXED page count is available after the vms_gather_munmap_vmas()
call, so use it instead of looping over the vmas twice.
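To illustrate the new ordering, a minimal userspace sketch follows.  This
is not the kernel code: struct vms_model, gather_pages(), abort_gather(),
may_expand() and map_region_model() are invented stand-ins for struct
vma_munmap_struct, vms_gather_munmap_vmas(), may_expand_vm() and
mmap_region().

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for struct vma_munmap_struct (vms). */
struct vms_model {
	unsigned long nr_pages;	/* pages a MAP_FIXED mapping would unmap */
};

/* Gather stage: record what would be unmapped; commit nothing. */
static void gather_pages(struct vms_model *vms, unsigned long overlap_pages)
{
	vms->nr_pages = overlap_pages;
}

/* Since nothing was committed, undoing is just discarding the state. */
static void abort_gather(struct vms_model *vms)
{
	vms->nr_pages = 0;
}

/* Invented stand-in for may_expand_vm(). */
static bool may_expand(unsigned long limit, unsigned long mapped,
		       unsigned long npages)
{
	return mapped + npages <= limit;
}

/* New ordering: gather first, then check the limit using the page
 * count the gather stage already produced. */
static int map_region_model(unsigned long limit, unsigned long mapped,
			    unsigned long pglen, unsigned long overlap_pages)
{
	struct vms_model vms;

	gather_pages(&vms, overlap_pages);
	/* MAP_FIXED replaces overlapping pages, so only the net growth
	 * (pglen - vms.nr_pages) counts against the limit. */
	if (!may_expand(limit, mapped, pglen - vms.nr_pages)) {
		abort_gather(&vms);	/* the "goto abort_munmap" path */
		return -1;		/* -ENOMEM in the kernel */
	}
	return 0;
}

int main(void)
{
	/* 16 new pages over 10 existing ones need headroom for only 6. */
	printf("%d\n", map_region_model(100, 96, 16, 10));	/* 0 */
	printf("%d\n", map_region_model(100, 96, 16, 0));	/* -1 */
	return 0;
}

The old ordering had to run count_vma_pages_range() as an extra loop over
the same range before the gather; folding the check in after the gather
stage removes that loop.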
Link: https://lkml.kernel.org/r/20240822192543.3359552-20-Liam.Howlett@xxxxxxxxxx
Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
Cc: Bert Karwatzki <spasswolf@xxxxxx>
Cc: Jiri Olsa <olsajiri@xxxxxxxxx>
Cc: Kees Cook <kees@xxxxxxxxxx>
Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
Cc: Paul Moore <paul@xxxxxxxxxxxxxx>
Cc: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mmap.c |   15 ++++-----------
 mm/vma.c  |   21 ---------------------
 mm/vma.h  |    3 ---
 3 files changed, 4 insertions(+), 35 deletions(-)

--- a/mm/mmap.c~mm-move-may_expand_vm-check-in-mmap_region
+++ a/mm/mmap.c
@@ -1376,17 +1376,6 @@ unsigned long mmap_region(struct file *f
 	pgoff_t vm_pgoff;
 	int error = -ENOMEM;
 	VMA_ITERATOR(vmi, mm, addr);
-	unsigned long nr_pages, nr_accounted;
-
-	nr_pages = count_vma_pages_range(mm, addr, end, &nr_accounted);
-
-	/*
-	 * Check against address space limit.
-	 * MAP_FIXED may remove pages of mappings that intersects with requested
-	 * mapping. Account for the pages it would unmap.
-	 */
-	if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
-		return -ENOMEM;
 
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
@@ -1409,6 +1398,10 @@ unsigned long mmap_region(struct file *f
 		vma_iter_next_range(&vmi);
 	}
 
+	/* Check against address space limit. */
+	if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages))
+		goto abort_munmap;
+
 	/*
 	 * Private writable mapping: check memory availability
 	 */
--- a/mm/vma.c~mm-move-may_expand_vm-check-in-mmap_region
+++ a/mm/vma.c
@@ -1645,27 +1645,6 @@ bool vma_wants_writenotify(struct vm_are
 	return vma_fs_can_writeback(vma);
 }
 
-unsigned long count_vma_pages_range(struct mm_struct *mm,
-		unsigned long addr, unsigned long end,
-		unsigned long *nr_accounted)
-{
-	VMA_ITERATOR(vmi, mm, addr);
-	struct vm_area_struct *vma;
-	unsigned long nr_pages = 0;
-
-	*nr_accounted = 0;
-	for_each_vma_range(vmi, vma, end) {
-		unsigned long vm_start = max(addr, vma->vm_start);
-		unsigned long vm_end = min(end, vma->vm_end);
-
-		nr_pages += PHYS_PFN(vm_end - vm_start);
-		if (vma->vm_flags & VM_ACCOUNT)
-			*nr_accounted += PHYS_PFN(vm_end - vm_start);
-	}
-
-	return nr_pages;
-}
-
 static DEFINE_MUTEX(mm_all_locks_mutex);
 
 static void vm_lock_anon_vma(struct mm_struct *mm, struct anon_vma *anon_vma)
--- a/mm/vma.h~mm-move-may_expand_vm-check-in-mmap_region
+++ a/mm/vma.h
@@ -315,9 +315,6 @@ bool vma_wants_writenotify(struct vm_are
 int mm_take_all_locks(struct mm_struct *mm);
 void mm_drop_all_locks(struct mm_struct *mm);
 
-unsigned long count_vma_pages_range(struct mm_struct *mm,
-		unsigned long addr, unsigned long end,
-		unsigned long *nr_accounted);
 static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
 {
_

Patches currently in -mm which might be from Liam.Howlett@xxxxxxxxxx are

maple_tree-remove-rcu_read_lock-from-mt_validate.patch
mm-vma-correctly-position-vma_iterator-in-__split_vma.patch
mm-vma-introduce-abort_munmap_vmas.patch
mm-vma-introduce-vmi_complete_munmap_vmas.patch
mm-vma-extract-the-gathering-of-vmas-from-do_vmi_align_munmap.patch
mm-vma-introduce-vma_munmap_struct-for-use-in-munmap-operations.patch
mm-vma-change-munmap-to-use-vma_munmap_struct-for-accounting-and-surrounding-vmas.patch
mm-vma-change-munmap-to-use-vma_munmap_struct-for-accounting-and-surrounding-vmas-fix.patch
mm-vma-extract-validate_mm-from-vma_complete.patch
mm-vma-inline-munmap-operation-in-mmap_region.patch
mm-vma-expand-mmap_region-munmap-call.patch
mm-vma-support-vma-==-null-in-init_vma_munmap.patch
mm-mmap-reposition-vma-iterator-in-mmap_region.patch
mm-vma-track-start-and-end-for-munmap-in-vma_munmap_struct.patch
mm-clean-up-unmap_region-argument-list.patch
mm-mmap-avoid-zeroing-vma-tree-in-mmap_region.patch
mm-change-failure-of-map_fixed-to-restoring-the-gap-on-failure.patch
mm-mmap-use-phys_pfn-in-mmap_region.patch
mm-mmap-use-vms-accounted-pages-in-mmap_region.patch
ipc-shm-mm-drop-do_vma_munmap.patch
mm-move-may_expand_vm-check-in-mmap_region.patch
mm-vma-drop-incorrect-comment-from-vms_gather_munmap_vmas.patch
mm-vmah-optimise-vma_munmap_struct.patch