On Fri, May 31, 2024 at 4:12 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> The patch titled
>      Subject: mm/vmalloc: fix vbq->free breakage
> has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
>      mm-vmalloc-fix-vbq-free-breakage.patch
>
> This patch will shortly appear at
>      https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-vmalloc-fix-vbq-free-breakage.patch
>
> This patch will later appear in the mm-hotfixes-unstable branch at
>      git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
>
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
>
> The -mm tree is included into linux-next via the mm-everything
> branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
> and is updated there every 2-3 working days
>
> ------------------------------------------------------
> From: "hailong.liu" <hailong.liu@xxxxxxxx>
> Subject: mm/vmalloc: fix vbq->free breakage
> Date: Thu, 30 May 2024 17:31:08 +0800
>
> The function xa_for_each() in _vm_unmap_aliases() loops through all vbs.
> However, since commit 062eacf57ad9 ("mm: vmalloc: remove a global
> vmap_blocks xarray") the vb taken from the xarray may not be on the
> corresponding CPU's vmap_block_queue.  Consequently,
> purge_fragmented_block() might use the wrong vbq->lock to protect the
> free list, leading to vbq->free breakage.
>
> Link: https://lkml.kernel.org/r/20240530093108.4512-1-hailong.liu@xxxxxxxx
> Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
> Signed-off-by: Hailong.Liu <liuhailong@xxxxxxxx>
> Reported-by: Guangye Yang <guangye.yang@xxxxxxxxxxxx>
> Cc: Barry Song <21cnbao@xxxxxxxxx>
> Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
> Cc: Gao Xiang <xiang@xxxxxxxxxx>
> Cc: Guangye Yang <guangye.yang@xxxxxxxxxxxx>
> Cc: liuhailong <liuhailong@xxxxxxxx>
> Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
> Cc: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> Cc: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  mm/vmalloc.c |    3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> --- a/mm/vmalloc.c~mm-vmalloc-fix-vbq-free-breakage
> +++ a/mm/vmalloc.c
> @@ -2830,10 +2830,9 @@ static void _vm_unmap_aliases(unsigned l
>  	for_each_possible_cpu(cpu) {
>  		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
>  		struct vmap_block *vb;
> -		unsigned long idx;
>
>  		rcu_read_lock();
> -		xa_for_each(&vbq->vmap_blocks, idx, vb) {
> +		list_for_each_entry_rcu(vb, &vbq->free, free_list) {

No, this is wrong: fully used vbs are not on vbq->free, so their stale
TLB entries would never be flushed.  I have sent patch v2 out.

>  			spin_lock(&vb->lock);
>
>  			/*
> _
>
> Patches currently in -mm which might be from hailong.liu@xxxxxxxx are
>
> mm-vmalloc-fix-vbq-free-breakage.patch
>
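
To make the failure mode concrete, a minimal sketch (the struct is a
trimmed-down view of mm/vmalloc.c and still_needs_flush() is a hypothetical
helper for illustration, not kernel code): once vb_alloc() hands out the
last page of a block, the block is unlinked from vbq->free, yet it can
still carry a dirty, not-yet-flushed range that _vm_unmap_aliases() has to
include in the TLB flush.

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>

/* Trimmed-down view of struct vmap_block, for illustration only. */
struct vmap_block {
	spinlock_t lock;
	unsigned long free;		/* pages still available in this block */
	unsigned long dirty_min;	/* start of the unmapped range that    */
	unsigned long dirty_max;	/* has not been flushed from the TLB   */
	struct list_head free_list;	/* unlinked from vbq->free once full   */
	/* ... */
};

/* Hypothetical helper, not part of the kernel. */
static bool still_needs_flush(const struct vmap_block *vb)
{
	/*
	 * A fully used block (vb->free == 0) has been removed from
	 * vbq->free, so list_for_each_entry_rcu(vb, &vbq->free, free_list)
	 * never visits it -- but its dirty range still holds stale TLB
	 * entries that must be flushed.
	 */
	return vb->dirty_min < vb->dirty_max;
}

So iterating vbq->free alone skips exactly the blocks this check is about,
which is the objection raised above.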