On 9/29/23 20:30, Liam R. Howlett wrote:
> During the error path, the vma iterator may not be correctly positioned
> or set to the correct range.  Undo the vma_prev() call by resetting to
> the passed in address.  Re-walking to the same range will fix the range
> to the area previously passed in.
>
> Users would notice increased cycles as vma_merge() would be called an
> extra time with vma == prev, and thus would fail to merge and return.
>
> Link: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejtiHpgJAbdQ@xxxxxxxxxxxxxx/
> Closes: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejtiHpgJAbdQ@xxxxxxxxxxxxxx/
> Fixes: 18b098af2890 ("vma_merge: set vma iterator to correct position.")
> Cc: stable@xxxxxxxxxxxxxxx
> Cc: Jann Horn <jannh@xxxxxxxxxx>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
>  mm/mmap.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b56a7f0c9f85..acb7dea49e23 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -975,7 +975,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
>
>  	/* Error in anon_vma clone. */
>  	if (err)
> -		return NULL;
> +		goto anon_vma_fail;
>
>  	if (vma_start < vma->vm_start || vma_end > vma->vm_end)
>  		vma_expanded = true;
> @@ -988,7 +988,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
>  	}
>
>  	if (vma_iter_prealloc(vmi, vma))
> -		return NULL;
> +		goto prealloc_fail;
>
>  	init_multi_vma_prep(&vp, vma, adjust, remove, remove2);
>  	VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma &&
> @@ -1016,6 +1016,12 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
>  	vma_complete(&vp, vmi, mm);
>  	khugepaged_enter_vma(res, vm_flags);
>  	return res;
> +
> +prealloc_fail:
> +anon_vma_fail:
> +	vma_iter_set(vmi, addr);
> +	vma_iter_load(vmi);
> +	return NULL;
>  }
>
>  /*
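
[Editor's illustration, not part of the original message.] The fix above follows a
common pattern: a function that repositions a caller-owned iterator (here via
vma_prev()) must rewind it on every failure exit so the caller finds it at the
range it passed in. The following is a minimal standalone sketch of that pattern
in plain C; the toy "struct iter" and helpers iter_set()/iter_prev()/prealloc()
only echo the naming of the kernel's vma_iterator, vma_iter_set(), vma_prev() and
vma_iter_prealloc(), and are not the kernel API.

/*
 * Toy model of the error-path rewind in vma_merge():
 * every failure exit resets the iterator to the caller's address.
 */
#include <stdbool.h>
#include <stdio.h>

struct iter {
	unsigned long index;		/* current position of the walk */
};

static void iter_set(struct iter *it, unsigned long addr)
{
	it->index = addr;		/* reposition without walking */
}

static void iter_prev(struct iter *it)
{
	if (it->index)
		it->index--;		/* step back, as vma_prev() does */
}

/* Stand-in for an allocation that can fail (vma_iter_prealloc()). */
static bool prealloc(bool inject_failure)
{
	return !inject_failure;
}

static int merge(struct iter *it, unsigned long addr, bool inject_failure)
{
	iter_prev(it);			/* inspect the previous range first */

	if (!prealloc(inject_failure))
		goto prealloc_fail;	/* error path: must undo iter_prev() */

	/* ... a successful merge leaves the iterator where it needs to be ... */
	return 0;

prealloc_fail:
	iter_set(it, addr);		/* rewind to the address the caller gave us */
	return -1;
}

int main(void)
{
	struct iter it = { .index = 100 };

	merge(&it, 100, true);
	printf("iterator after failed merge: %lu (expected 100)\n", it.index);
	return 0;
}

Without the rewind on the failure path, the caller would retry with the iterator
still pointing at the previous range, which is exactly the extra, always-failing
vma_merge() call the changelog describes.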