On 05/31/24 at 10:04am, Uladzislau Rezki wrote:
> On Fri, May 31, 2024 at 11:05:20AM +0800, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > The vmalloc area runs out on our ARM64 system during an erofs test,
> > with vm_map_ram() failing[1]. Following the debug log, we find that
> > vm_map_ram()->vb_alloc() allocates a new vb->va, each corresponding
> > to a 4MB vmalloc area, because list_for_each_entry_rcu returns
> > immediately when vbq->free->next points to vbq->free. That is to say,
> > 65536 page faults after the list is broken will exhaust the whole
> > vmalloc area. The cause is a vbq->free->next that points back to
> > vbq->free, which prevents list_for_each_entry_rcu from iterating the
> > list and finding the BUG.
> >
> > [1]
> > PID: 1 TASK: ffffff80802b4e00 CPU: 6 COMMAND: "init"
> >  #0 [ffffffc08006afe0] __switch_to at ffffffc08111d5cc
> >  #1 [ffffffc08006b040] __schedule at ffffffc08111dde0
> >  #2 [ffffffc08006b0a0] schedule at ffffffc08111e294
> >  #3 [ffffffc08006b0d0] schedule_preempt_disabled at ffffffc08111e3f0
> >  #4 [ffffffc08006b140] __mutex_lock at ffffffc08112068c
> >  #5 [ffffffc08006b180] __mutex_lock_slowpath at ffffffc08111f8f8
> >  #6 [ffffffc08006b1a0] mutex_lock at ffffffc08111f834
> >  #7 [ffffffc08006b1d0] reclaim_and_purge_vmap_areas at ffffffc0803ebc3c
> >  #8 [ffffffc08006b290] alloc_vmap_area at ffffffc0803e83fc
> >  #9 [ffffffc08006b300] vm_map_ram at ffffffc0803e78c0
> >
> > Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
> >
> > Suggested-by: Hailong.Liu <hailong.liu@xxxxxxxx>
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
>
> Is the problem related _only_ to running out of vmalloc space, or is it
> a problem with the broken list? From the commit message it is hard to
> follow the reason.

The broken list is what causes the vmalloc space to run out, so I think
we should fix the broken list itself. I wonder whether the issue can
always be reproduced, or is only rarely seen. Unless it's infeasible, we
should try making a patch that fixes the list breakage directly. I will
have a look at this.

> Could you please post a full trace or panic?
>
> > ---
> > v2: introduce cpu in vmap_block to record the right CPU number
> > v3: use get_cpu/put_cpu to prevent schedule between cores
> > ---
> > ---
> >  mm/vmalloc.c | 12 ++++++++----
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 22aa63f4ef63..ecdb75d10949 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -2458,6 +2458,7 @@ struct vmap_block {
> >  	struct list_head free_list;
> >  	struct rcu_head rcu_head;
> >  	struct list_head purge;
> > +	unsigned int cpu;
> >  };
> >
> >  /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
> > @@ -2586,10 +2587,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> >  		return ERR_PTR(err);
> >  	}
> >
> > +	vb->cpu = get_cpu();
> >  	vbq = raw_cpu_ptr(&vmap_block_queue);
> >  	spin_lock(&vbq->lock);
> >  	list_add_tail_rcu(&vb->free_list, &vbq->free);
> >  	spin_unlock(&vbq->lock);
> > +	put_cpu();
>
> Why do you need get_cpu() here? Can you go with raw_smp_processor_id()
> and then access the per-cpu "vmap_block_queue"? get_cpu() disables
> preemption, and then a spinlock is taken within this critical section.
> At first glance, PREEMPT_RT is broken in this case.
>
> I am on vacation, so responses may be delayed.
>
> --
> Uladzislau Rezki
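
[Editor's note: to make the failure mode concrete, here is a small
self-contained userspace demo, not kernel code. The simplified
list_head, list_add_tail, and vmap_block here are cut-down stand-ins
for the kernel's; only the shape of the bug is preserved. In a circular
doubly-linked list, head->next == head means "empty", so once the
corruption makes vbq->free.next point back at the head, the iteration
body never runs, the lookup reports no free block, and every
vm_map_ram() call falls through to new_vmap_block() and burns another
4MB of vmalloc space.]

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct vmap_block {
	unsigned long free;		/* free pages in this block */
	struct list_head free_list;
};

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

int main(void)
{
	struct list_head free_head = { &free_head, &free_head };
	struct vmap_block vb = { .free = 1024 };
	struct list_head *pos;
	int found = 0;

	list_add_tail(&vb.free_list, &free_head);

	/* Simulate the corruption: head->next points back at the head. */
	free_head.next = &free_head;

	/* The open-coded equivalent of list_for_each_entry(): the loop
	 * terminates immediately, so the free block is never seen. */
	for (pos = free_head.next; pos != &free_head; pos = pos->next) {
		struct vmap_block *b =
			container_of(pos, struct vmap_block, free_list);
		if (b->free) {
			found = 1;
			break;
		}
	}

	/* Prints "not found"; the caller would allocate a new block. */
	printf("free block %s\n", found ? "found" : "not found");
	return 0;
}

Running this prints "free block not found" even though a block with
free pages was added a moment earlier, which mirrors the report's
symptom of one fresh 4MB block per page fault.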
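[Editor's note: to make the reviewer's alternative concrete, a sketch
of how the new_vmap_block() hunk could look without disabling
preemption — this is one reading of the suggestion, not the posted
patch or a confirmed upstream fix:]

	/* Sketch only: read the CPU id without disabling preemption.
	 * A migration between the read and the insertion is tolerable,
	 * because all that matters is that vb->cpu names the queue the
	 * block was actually added to, so the purge path later takes
	 * the matching vbq->lock. */
	vb->cpu = raw_smp_processor_id();
	vbq = per_cpu_ptr(&vmap_block_queue, vb->cpu);
	spin_lock(&vbq->lock);
	list_add_tail_rcu(&vb->free_list, &vbq->free);
	spin_unlock(&vbq->lock);

The spinlock alone serializes the insertion; pinning the CPU with
get_cpu() buys nothing here, and avoiding it sidesteps the PREEMPT_RT
concern, since on RT kernels spinlocks can sleep and must not be taken
inside a preemption-disabled section.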