On Wed, Mar 02, 2022 at 12:24:34AM -0800, Christoph Hellwig wrote:
> On Wed, Mar 02, 2022 at 10:06:04AM +0800, Ming Lei wrote:
> > I did consider xa_for_each(), but it requires the rcu read lock.
>
> No, it doesn't. It just takes an RCU lock internally.

OK.

> > Also queue_for_each_hw_ctx() is not supposed to run in the fast path,
> > and xa_load() is lightweight enough, so the repeated xa_load() is
> > fine here.
>
> I'd rather have the clarity of the proper iterators.

Another point is that xa_find() takes the index as an 'unsigned long *'.
However, almost all users of queue_for_each_hw_ctx() define the hctx
index as 'unsigned int'. If we switch to xa_for_each(), the type of the
hctx index has to be changed to 'unsigned long' for every
queue_for_each_hw_ctx() caller, while xa_load() needs no such change.

Also, from the user's viewpoint, changing the hctx index to
'unsigned long' still looks a bit confusing, IMO.

Thanks,
Ming
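
[Editor's note: a minimal sketch of the two iterator shapes under
discussion, assuming the hw ctx map is kept in an xarray on the request
queue. The ->hctx_table field name and the two distinct macro names are
illustrative assumptions, not taken verbatim from the patch; in practice
only one of them would be defined as queue_for_each_hw_ctx().]

#include <linux/blk-mq.h>
#include <linux/xarray.h>

/*
 * Alternative A: xa_load() based iterator.  xa_load() takes the index
 * by value (unsigned long), so an 'unsigned int' index in the callers
 * is implicitly converted and needs no type change.  The loop shape
 * mirrors the existing array-based macro, with xa_load() substituted
 * for the queue_hw_ctx[] dereference.
 */
#define queue_for_each_hw_ctx_load(q, hctx, i)				\
	for ((i) = 0; (i) < (q)->nr_hw_queues &&			\
	     ({ (hctx) = xa_load(&(q)->hctx_table, (i)); 1; });	\
	     (i)++)

/*
 * Alternative B: xa_for_each() based iterator.  xa_for_each() expands
 * to xa_find()/xa_find_after(), which take 'unsigned long *', so the
 * index variable passed in by every caller must be 'unsigned long'.
 */
#define queue_for_each_hw_ctx_iter(q, hctx, i)				\
	xa_for_each(&(q)->hctx_table, (i), (hctx))

/*
 * Example caller, showing the type difference:
 *
 *	struct blk_mq_hw_ctx *hctx;
 *	unsigned int i;		// fine with alternative A
 *	unsigned long j;	// required by alternative B
 *
 *	queue_for_each_hw_ctx_load(q, hctx, i)
 *		do_something(hctx, i);
 *
 *	queue_for_each_hw_ctx_iter(q, hctx, j)
 *		do_something(hctx, j);
 */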