On 05/24/23 at 11:52am, Thomas Gleixner wrote:
> On Wed, May 24 2023 at 17:32, Baoquan He wrote:
> > On 05/23/23 at 04:02pm, Thomas Gleixner wrote:
> >> @@ -2236,9 +2236,10 @@ static void _vm_unmap_aliases(unsigned l
> >>  	for_each_possible_cpu(cpu) {
> >>  		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
> >>  		struct vmap_block *vb;
> >> +		unsigned long idx;
> >>  
> >>  		rcu_read_lock();
> > 
> > Do we need to remove this rcu_read_xx() pair since it marks the RCU
> > read-side critical section on the vbq->free list?
> 
> And what protects the xarray lookup?

xa_find(), which xa_for_each() ends up calling, already takes its own
rcu_read_lock() pair around xas_find(), and it does a fresh lookup into
the xarray for each iterated item. So, unlike a list walk, we shouldn't
lose the connection to the next element when entries are added or
removed? I am not very sure, I could be wrong.

xa_for_each()
-->xa_for_each_start()
   -->xa_for_each_range()
      -->xa_find()

void *xa_find(struct xarray *xa, unsigned long *indexp,
		unsigned long max, xa_mark_t filter)
{
	......
	rcu_read_lock();
	do {
		if ((__force unsigned int)filter < XA_MAX_MARKS)
			entry = xas_find_marked(&xas, max, filter);
		else
			entry = xas_find(&xas, max);
	} while (xas_retry(&xas, entry));
	rcu_read_unlock();

	if (entry)
		*indexp = xas.xa_index;
	return entry;
}

> 
> >> -		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> >> +		xa_for_each(&vbq->vmap_blocks, idx, vb) {
> >>  			spin_lock(&vb->lock);
> >>  			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> >>  				unsigned long va_start = vb->va->va_start;
> >> 
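
Just to make the question concrete, below is a rough sketch of what I
was wondering about, based only on the hunk quoted above (completely
untested, and the elided rest of the loop body is left out): drop the
outer rcu_read_lock()/rcu_read_unlock() and rely on the per-lookup RCU
protection inside xa_find():

	for_each_possible_cpu(cpu) {
		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
		struct vmap_block *vb;
		unsigned long idx;

		/* no outer rcu_read_lock() here */
		xa_for_each(&vbq->vmap_blocks, idx, vb) {
			/*
			 * Each xa_for_each() step goes through xa_find()/
			 * xa_find_after(), which take rcu_read_lock()
			 * internally around the lookup itself.
			 */
			spin_lock(&vb->lock);
			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
				unsigned long va_start = vb->va->va_start;
				/* ... rest of the existing body ... */
			}
			spin_unlock(&vb->lock);
		}
		/* no outer rcu_read_unlock() here */
	}

Not suggesting this is correct, it is only to illustrate what I am
asking about.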