The patch titled
     Subject: mm/vmalloc: check free space in vmap_block lockless
has been added to the -mm mm-unstable branch.  Its filename is
     mm-vmalloc-check-free-space-in-vmap_block-lockless.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-vmalloc-check-free-space-in-vmap_block-lockless.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Subject: mm/vmalloc: check free space in vmap_block lockless
Date: Thu, 25 May 2023 14:57:07 +0200 (CEST)

vb_alloc() unconditionally locks a vmap_block on the free list to check
the free space.

This can be done locklessly because vmap_block::free never increases;
it is only decreased on allocations.

Check the free space locklessly and, only if that check succeeds,
recheck under the lock.

Link: https://lkml.kernel.org/r/20230525124504.750481992@xxxxxxxxxxxxx
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
Cc: Baoquan He <bhe@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmalloc.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/mm/vmalloc.c~mm-vmalloc-check-free-space-in-vmap_block-lockless
+++ a/mm/vmalloc.c
@@ -2168,6 +2168,9 @@ static void *vb_alloc(unsigned long size
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
+		if (READ_ONCE(vb->free) < (1UL << order))
+			continue;
+
 		spin_lock(&vb->lock);
 		if (vb->free < (1UL << order)) {
 			spin_unlock(&vb->lock);
@@ -2176,7 +2179,7 @@ static void *vb_alloc(unsigned long size
 
 		pages_off = VMAP_BBMAP_BITS - vb->free;
 		vaddr = vmap_block_vaddr(vb->va->va_start, pages_off);
-		vb->free -= 1UL << order;
+		WRITE_ONCE(vb->free, vb->free - (1UL << order));
 		bitmap_set(vb->used_map, pages_off, (1UL << order));
 		if (vb->free == 0) {
 			spin_lock(&vbq->lock);
_

Patches currently in -mm which might be from tglx@xxxxxxxxxxxxx are

mm-vmalloc-prevent-stale-tlbs-in-fully-utilized-blocks.patch
mm-vmalloc-avoid-iterating-over-per-cpu-vmap-blocks-twice.patch
mm-vmalloc-prevent-flushing-dirty-space-over-and-over.patch
mm-vmalloc-check-free-space-in-vmap_block-lockless.patch
mm-vmalloc-add-missing-read-write_once-annotations.patch
mm-vmalloc-dont-purge-usable-blocks-unnecessarily.patch
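
For reference, the pattern the patch relies on can be sketched standalone:
because vmap_block::free only ever shrinks (and only under the lock), an
unlocked read that observes "not enough space" can never be wrong in a
harmful way, so it is safe to skip the block without taking its lock; a
read that observes "enough space" may be stale and must be rechecked
under the lock before committing.  Below is a minimal userspace C
approximation of that idea, not kernel code: READ_ONCE/WRITE_ONCE are
modeled on the kernel macros, and struct block/try_alloc() are
illustrative stand-ins for vmap_block/vb_alloc().

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Userspace approximations of the kernel's READ_ONCE/WRITE_ONCE. */
#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

struct block {
	pthread_mutex_t lock;
	unsigned long free;	/* only ever decreased, under the lock */
};

/* Try to take 2^order units from @b; returns true on success. */
static bool try_alloc(struct block *b, unsigned int order)
{
	/*
	 * Opportunistic lockless check: b->free never grows, so if it
	 * is already too small we can skip this block without ever
	 * touching its lock.
	 */
	if (READ_ONCE(b->free) < (1UL << order))
		return false;

	pthread_mutex_lock(&b->lock);
	/* Recheck: another thread may have allocated meanwhile. */
	if (b->free < (1UL << order)) {
		pthread_mutex_unlock(&b->lock);
		return false;
	}
	/* Annotated store, paired with the lockless READ_ONCE above. */
	WRITE_ONCE(b->free, b->free - (1UL << order));
	pthread_mutex_unlock(&b->lock);
	return true;
}

int main(void)
{
	struct block b = { .lock = PTHREAD_MUTEX_INITIALIZER, .free = 8 };

	printf("order-2 alloc: %s\n", try_alloc(&b, 2) ? "ok" : "busy");
	printf("remaining: %lu\n", b.free);
	return 0;
}

The asymmetry is what makes this safe: a racy read can only see a value
that is greater than or equal to the current one, so the only possible
error is a false "enough space", and the locked recheck catches exactly
that case.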