When merging with the previous VMA fails after the vma iterator has been
moved to the previous entry, the vma iterator must be advanced again so
that the caller takes the correct action on the next vma iterator
operation.  Fix this by adding a vma_next() call to the error path.

Users may experience higher CPU usage, most likely in very low memory
situations.

Link: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejtiHpgJAbdQ@xxxxxxxxxxxxxx/
Closes: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejtiHpgJAbdQ@xxxxxxxxxxxxxx/
Fixes: 18b098af2890 ("vma_merge: set vma iterator to correct position.")
Cc: stable@xxxxxxxxxxxxxxx
Cc: Jann Horn <jannh@xxxxxxxxxx>
Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
---
 mm/mmap.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index b56a7f0c9f85..b5bc4ca9bdc4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -968,14 +968,14 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 				vma_pgoff = curr->vm_pgoff;
 				vma_start_write(curr);
 				remove = curr;
-				err = dup_anon_vma(next, curr);
+				err = dup_anon_vma(next, curr, &anon_dup);
 			}
 		}
 	}
 
 	/* Error in anon_vma clone. */
 	if (err)
-		return NULL;
+		goto anon_vma_fail;
 
 	if (vma_start < vma->vm_start || vma_end > vma->vm_end)
 		vma_expanded = true;
@@ -988,7 +988,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 	}
 
 	if (vma_iter_prealloc(vmi, vma))
-		return NULL;
+		goto prealloc_fail;
 
 	init_multi_vma_prep(&vp, vma, adjust, remove, remove2);
 	VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma &&
@@ -1016,6 +1016,12 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 	vma_complete(&vp, vmi, mm);
 	khugepaged_enter_vma(res, vm_flags);
 	return res;
+
+prealloc_fail:
+anon_vma_fail:
+	if (merge_prev)
+		vma_next(vmi);
+	return NULL;
 }
 
 /*
-- 
2.40.1
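
For context, the iterator movement the new error path undoes looks
roughly like the sketch below.  This is a minimal sketch, not the
mm/mmap.c source: can_merge_prev() and expand_prev() are hypothetical
stand-ins for the real merge checks and for the dup_anon_vma() /
vma_iter_prealloc() steps that can fail, while vma_prev(), vma_next()
and the merge_prev flag correspond to the actual code.

/* Minimal sketch of the iterator contract; helper names are hypothetical. */
static struct vm_area_struct *merge_sketch(struct vma_iterator *vmi,
					   struct vm_area_struct *prev,
					   unsigned long addr)
{
	bool merge_prev = can_merge_prev(prev, addr);	/* hypothetical check */
	int err = 0;

	if (merge_prev) {
		vma_prev(vmi);			/* iterator now points at prev */
		err = expand_prev(vmi, prev);	/* hypothetical; may fail, e.g. -ENOMEM */
	}
	if (err)
		goto fail;

	return prev;

fail:
	/*
	 * Undo the vma_prev(): without this step a failed merge returns
	 * NULL while the iterator is still on the previous entry, so the
	 * caller's next iterator operation may act on the wrong VMA and
	 * re-process a range it has already handled.
	 */
	if (merge_prev)
		vma_next(vmi);
	return NULL;
}

In the real function both failure points (the anon_vma_fail and
prealloc_fail labels in the diff) fall through to the same conditional
vma_next() call, which is why the labels share one error path.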