On 10/25/24 14:26, Lorenzo Stoakes wrote:
> Rather than trying to merge again when ostensibly allocating a new VMA,
> instead defer until the VMA is added and attempt to merge the existing
> range.
>
> This way we have no complicated unwinding logic midway through the process
> of mapping the VMA.
>
> In addition this removes limitations on the VMA not being able to be the
> first in the virtual memory address space which was previously implicitly
> required.
>
> In theory, for this very same reason, we should unconditionally attempt
> merge here, however this is likely to have a performance impact so it is
> better to avoid this given the unlikely outcome of a merge.
>
> Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
> ---
>  mm/vma.c | 55 ++++++++++++++-----------------------------------------
>  1 file changed, 14 insertions(+), 41 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 7c690be67910..7194f9449c60 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -19,6 +19,7 @@ struct mmap_state {
>  	struct file *file;
>
>  	unsigned long charged;
> +	bool retry_merge;
>
>  	struct vm_area_struct *prev;
>  	struct vm_area_struct *next;
> @@ -2278,8 +2279,9 @@ static int __mmap_prepare(struct mmap_state *map, struct list_head *uf)
>  	return 0;
>  }
>
> +
>  static int __mmap_new_file_vma(struct mmap_state *map,
> -			       struct vm_area_struct **vmap, bool *mergedp)
> +			       struct vm_area_struct **vmap)

AFAICS **vmap could become *vma now

>  {
>  	struct vma_iterator *vmi = map->vmi;
>  	struct vm_area_struct *vma = *vmap;
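
To expand on that: since the merge retry is deferred out of this path, the
function can no longer replace the VMA it was given, so the double pointer
does look unnecessary. A sketch of the simplified form (body elided, and
assuming nothing else in it reassigns the VMA):

static int __mmap_new_file_vma(struct mmap_state *map,
			       struct vm_area_struct *vma)
{
	struct vma_iterator *vmi = map->vmi;

	/*
	 * The VMA is never swapped out from under the caller any more,
	 * so there is nothing to write back through a double pointer.
	 */
	...
}

with callers then passing the VMA directly rather than its address.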
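
And for reference, I'd expect the deferred merge the log describes to be
consumed at the end of __mmap_region(), roughly along these lines (a sketch
only; VMG_MMAP_STATE() and vma_merge_existing_range() are assumed from the
earlier merge refactoring and may not match the series exactly):

	/* If the prepare stage flagged a possible merge, retry it now
	 * that the new VMA is fully linked into the tree. */
	if (map.retry_merge) {
		VMG_MMAP_STATE(vmg, &map, vma);

		vma_merge_existing_range(&vmg);
	}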