On Wed, May 24 2023 at 22:10, Baoquan He wrote:
> On 05/24/23 at 11:52am, Thomas Gleixner wrote:
>> On Wed, May 24 2023 at 17:32, Baoquan He wrote:
>> > On 05/23/23 at 04:02pm, Thomas Gleixner wrote:
>> >> @@ -2236,9 +2236,10 @@ static void _vm_unmap_aliases(unsigned l
>> >>  	for_each_possible_cpu(cpu) {
>> >>  		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
>> >>  		struct vmap_block *vb;
>> >> +		unsigned long idx;
>> >>
>> >>  		rcu_read_lock();
>> >
>> > Do we need to remove this rcu_read_xx() pair since it marks the RCU
>> > read-side critical section on vbq-free list?
>>
>> And what protects the xarray lookup?
>
> We have put rcu_read_lock() pair around the xas_find(). And it will find
> into xarray for each iteration item. We won't lose the connection to the
> next element like list adding or deleting? not very sure, I could be
> wrong.
>
> xa_for_each()
>   -->xa_for_each_start()
>     -->xa_for_each_range()
>       -->xa_find()

I know how xarray works. No need to copy the code.

rcu_read_lock() inside xa_find() protects the search, but it does not
protect the returned entry, which might go away right after xa_find()
does rcu_read_unlock().

Thanks,

        tglx
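
For reference, a minimal sketch of the pattern being discussed, built around
the quoted _vm_unmap_aliases() hunk. Only the declarations and the
rcu_read_lock() appear in the quoted diff; the xa_for_each() over
vbq->vmap_blocks, the vb->lock usage, the abbreviated loop body and the
kfree_rcu() remark are assumptions drawn from the surrounding discussion, not
from this mail. The point is that the outer read-side critical section is what
keeps the looked-up vmap_block alive while the loop body uses it:

	for_each_possible_cpu(cpu) {
		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
		struct vmap_block *vb;
		unsigned long idx;

		/*
		 * Outer read-side critical section: keeps each vb returned by
		 * the lookup valid while the loop body dereferences it
		 * (assuming the free side defers the actual free with
		 * kfree_rcu()). The rcu_read_lock() taken inside xa_find()
		 * covers only the tree walk and is dropped before the entry
		 * is handed back to the caller.
		 */
		rcu_read_lock();
		xa_for_each(&vbq->vmap_blocks, idx, vb) {
			spin_lock(&vb->lock);
			/* ... flush/purge the dirty range of this block ... */
			spin_unlock(&vb->lock);
		}
		rcu_read_unlock();
	}

Without the outer pair, nothing would prevent a vb from going through its RCU
grace period and being freed between one iteration and the next use of the
pointer, which is why the pair stays even though the lookup itself is already
RCU-safe.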