The may_expand_vm() check requires the count of the pages within the munmap
range.  Since this count is needed for accounting and is obtained later
anyway, reordering may_expand_vm() to later in the call stack, after the vma
munmap struct (vms) is initialised and the gather stage has potentially run,
allows for a single loop over the vmas.  The gather stage does not commit any
work, so everything can be undone in the case of a failure.

The MAP_FIXED page count is available after the vms_gather_munmap_vmas()
call, so use it instead of looping over the vmas twice.

Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
---
 mm/mmap.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index d2f47cd66650..e0c321b7c642 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1376,17 +1376,6 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	pgoff_t vm_pgoff;
 	int error = -ENOMEM;
 	VMA_ITERATOR(vmi, mm, addr);
-	unsigned long nr_pages, nr_accounted;
-
-	nr_pages = count_vma_pages_range(mm, addr, end, &nr_accounted);
-
-	/*
-	 * Check against address space limit.
-	 * MAP_FIXED may remove pages of mappings that intersects with requested
-	 * mapping. Account for the pages it would unmap.
-	 */
-	if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
-		return -ENOMEM;
 
 
 	/* Find the first overlapping VMA */
@@ -1414,6 +1403,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		vma_iter_next_range(&vmi);
 	}
 
+	/* Check against address space limit. */
+	if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages))
+		goto abort_munmap;
+
	/*
	 * Private writable mapping: check memory availability
	 */
-- 
2.45.2
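
For readers less familiar with the gather/commit split, below is a minimal
userspace sketch of the ordering the commit message describes: count the
pages an overlapping fixed mapping would remove during a non-committing
gather pass, then run the address-space-limit check with that count, and
abort cleanly if it fails.  All names here (toy_vma, gather_munmap_pages,
may_expand) and the numbers are hypothetical stand-ins, not the mm/mmap.c
implementation.

/*
 * Illustrative sketch only -- not kernel code.  Models: gather first
 * (commits nothing), then check the limit using the gathered count.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vma {
	unsigned long start, end;	/* page-aligned range, in pages */
};

/* Gather stage: count pages the new mapping would unmap; commits nothing. */
static unsigned long gather_munmap_pages(const struct toy_vma *vmas, int nr,
					 unsigned long addr, unsigned long end)
{
	unsigned long pages = 0;

	for (int i = 0; i < nr; i++) {
		unsigned long lo = vmas[i].start > addr ? vmas[i].start : addr;
		unsigned long hi = vmas[i].end < end ? vmas[i].end : end;

		if (lo < hi)
			pages += hi - lo;
	}
	return pages;
}

/* Limit check: new mapping length minus pages the overlap would remove. */
static bool may_expand(unsigned long total, unsigned long limit,
		       unsigned long pglen, unsigned long nr_removed)
{
	return total + pglen - nr_removed <= limit;
}

int main(void)
{
	struct toy_vma vmas[] = { { 10, 20 }, { 30, 40 } };
	unsigned long total = 20, limit = 25;	/* pages mapped / allowed */
	unsigned long addr = 15, end = 35;	/* requested fixed range  */
	unsigned long pglen = end - addr;

	/* Gather first, so a single pass yields the count the check needs. */
	unsigned long removed = gather_munmap_pages(vmas, 2, addr, end);

	if (!may_expand(total, limit, pglen, removed)) {
		/* Gather committed nothing, so aborting needs no restore. */
		fprintf(stderr, "would exceed limit, aborting\n");
		return 1;
	}
	printf("ok: %lu pages removed by overlap, mapping %lu pages\n",
	       removed, pglen);
	return 0;
}

The point of the ordering is that the gather pass commits no work, so a
failed limit check only has to discard the gathered state (the
abort_munmap path in the real patch) rather than restore any mappings.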