On 4/21/21 8:15 PM, Ming Lei wrote:
> On Tue, Apr 20, 2021 at 05:02:33PM -0700, Bart Van Assche wrote:
>> +static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
>> +{
>> +	struct bt_tags_iter_data *iter_data = data;
>> +	struct blk_mq_tags *tags = iter_data->tags;
>> +	bool res;
>> +
>> +	if (iter_data->flags & BT_TAG_ITER_MAY_SLEEP) {
>> +		down_read(&tags->iter_rwsem);
>> +		res = __bt_tags_iter(bitmap, bitnr, data);
>> +		up_read(&tags->iter_rwsem);
>> +	} else {
>> +		rcu_read_lock();
>> +		res = __bt_tags_iter(bitmap, bitnr, data);
>> +		rcu_read_unlock();
>> +	}
>> +
>> +	return res;
>> +}
>
> Holding one rwsem or RCU read lock won't avoid the issue completely,
> because the request may be completed remotely in iter_data->fn(), such
> as in nbd_clear_req(), nvme_cancel_request(), complete_all_cmds_iter()
> and mtip_no_dev_cleanup(): blk_mq_complete_request() may complete the
> request in softirq context, via a remote IPI, or even in a workqueue,
> and the request is still referenced in those contexts after
> bt_tags_iter() returns.

The rwsem and RCU read lock are used to serialize iterating over
requests against blk_mq_sched_free_requests() calls. I don't think it
matters for this patch from which context requests are freed.

Bart.