This is a note to let you know that I've just added the patch titled

    mm/mmap: Fix extra maple tree write

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-mmap-fix-extra-maple-tree-write.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From Liam.Howlett@xxxxxxxxxx Sun Jul 16 17:02:51 2023
From: "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>
Date: Thu, 6 Jul 2023 14:51:35 -0400
Subject: mm/mmap: Fix extra maple tree write
To: linux-kernel@xxxxxxxxxxxxxxx
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>, John Hsu <John.Hsu@xxxxxxxxxxxx>, stable@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx
Message-ID: <20230706185135.2235532-1-Liam.Howlett@xxxxxxxxxx>

From: "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>

Based on commit 0503ea8f5ba73eb3ab13a81c1eefbaf51405385a upstream.

This was inadvertently fixed during the removal of __vma_adjust().

When __vma_adjust() is adjusting next with a negative value (pushing
vma->vm_end lower), there would be two writes to the maple tree.  The
first write is unnecessary and uses all allocated nodes in the maple
state.  The second write is necessary but will need to allocate nodes
since the first write has used the allocated nodes.  This may be a
problem as it may not be safe to allocate at this time, such as a low
memory situation.  Fix the issue by avoiding the first write and only
writing the adjusted "next" VMA.

Reported-by: John Hsu <John.Hsu@xxxxxxxxxxxx>
Link: https://lore.kernel.org/lkml/9cb8c599b1d7f9c1c300d1a334d5eb70ec4d7357.camel@xxxxxxxxxxxx/
Cc: stable@xxxxxxxxxxxxxxx
Cc: linux-mm@xxxxxxxxx
Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 mm/mmap.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -767,7 +767,8 @@ int __vma_adjust(struct vm_area_struct *
 		}
 		if (end != vma->vm_end) {
 			if (vma->vm_end > end) {
-				if (!insert || (insert->vm_start != end)) {
+				if ((vma->vm_end + adjust_next != end) &&
+				    (!insert || (insert->vm_start != end))) {
 					vma_mas_szero(&mas, end, vma->vm_end);
 					mas_reset(&mas);
 					VM_WARN_ON(insert &&


Patches currently in stable-queue which might be from Liam.Howlett@xxxxxxxxxx are

queue-6.1/mm-mmap-fix-extra-maple-tree-write.patch
queue-6.1/mm-mmap-fix-vm_locked-check-in-do_vmi_align_munmap.patch