On 5/6/21 5:11 PM, Ming Lei wrote:
> On Thu, May 06, 2021 at 07:51:53AM -0700, Bart Van Assche wrote:
> > I'm not sure that would be a better name since I don't think that it is
> > necessary to hold that lock around the cmpxchg() calls. How about using
> > something like the following code in blk_mq_clear_rq_mapping() instead
> > of the code in v5 of patch 3/4?
> >
> > spin_lock_irqsave(&drv_tags->lock, flags);
> > spin_unlock_irqrestore(&drv_tags->lock, flags);
> >
> > list_for_each_entry(page, &tags->page_list, lru) {
> > 	/* use cmpxchg() to clear request pointer selectively */
> > }
> This way won't work as expected because the iteration may happen between
> releasing drv_tags->lock and the cmpxchg() on ->rqs[], so the iteration may
> still touch requests after they have been freed in blk_mq_free_rqs().

Right, the unlock should happen after the pointers have been cleared.
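
To spell out the race with the snippet above (the interleaving is
illustrative; I'm assuming blk_mq_find_and_get_req() reads ->rqs[] and takes
the request reference under drv_tags->lock, as elsewhere in this series):

	/*
	 * blk_mq_clear_rq_mapping()                 concurrent iteration
	 * -------------------------                 --------------------
	 * spin_lock_irqsave(&drv_tags->lock)
	 * spin_unlock_irqrestore(&drv_tags->lock)
	 *                                           blk_mq_find_and_get_req():
	 *                                             lock drv_tags->lock
	 *                                             rq = drv_tags->rqs[i] (stale)
	 * cmpxchg(&drv_tags->rqs[i], rq, NULL)
	 * blk_mq_free_rqs() frees the request pages
	 *                                             refcount_inc_not_zero(&rq->ref)
	 *                                               -> use-after-free
	 *                                             unlock drv_tags->lock
	 */
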
But I think it is safe to move the spin_lock call down such that both the
spin_lock and spin_unlock calls happen after the pointers have been cleared.
That is sufficient to guarantee that every blk_mq_find_and_get_req() call
happens either before or after the spin lock / unlock pair.
blk_mq_find_and_get_req() calls that happen before the pair complete before
the requests are freed. Calls that happen after the pair will read either
NULL or a pointer to a request that is associated with another queue and
hence won't trigger a use-after-free.
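
Concretely, something like the following is what I have in mind for
blk_mq_clear_rq_mapping(). This is only a sketch against the v5 patch: I'm
assuming its clearing loop stays as-is (order_to_size(), page->private
holding the allocation order, drv_tags->nr_tags bounding ->rqs[]), with only
the lock/unlock pair moved after the loop:

static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
				    struct blk_mq_tags *tags)
{
	struct page *page;
	unsigned long flags;

	/* Clear any ->rqs[] pointers into the pages that are about to be freed. */
	list_for_each_entry(page, &tags->page_list, lru) {
		unsigned long start = (unsigned long)page_address(page);
		unsigned long end = start + order_to_size(page->private);
		int i;

		for (i = 0; i < drv_tags->nr_tags; i++) {
			struct request *rq = drv_tags->rqs[i];
			unsigned long rq_addr = (unsigned long)rq;

			if (rq_addr >= start && rq_addr < end) {
				WARN_ON_ONCE(refcount_read(&rq->ref) != 0);
				cmpxchg(&drv_tags->rqs[i], rq, NULL);
			}
		}
	}

	/*
	 * Wait for pending iterations to finish. blk_mq_find_and_get_req()
	 * reads ->rqs[] under drv_tags->lock, so any iteration that read a
	 * stale pointer completes before this pair returns, and iterations
	 * that start afterwards read either NULL or a request that belongs
	 * to another queue.
	 */
	spin_lock_irqsave(&drv_tags->lock, flags);
	spin_unlock_irqrestore(&drv_tags->lock, flags);
}

Compared to holding drv_tags->lock across the whole loop, this keeps the
cmpxchg() path lock-free and only uses the lock as a barrier against
concurrent iteration.
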
Bart.