@@ -341,8 +341,11 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 	int i;
 
 	for (i = 0; i < tagset->nr_hw_queues; i++) {
-		if (tagset->tags && tagset->tags[i])
+		if (tagset->tags && tagset->tags[i]) {
 			blk_mq_all_tag_busy_iter(tagset->tags[i], fn, priv);
+			if (tagset->share_tags)
+				break;
+		}
 	}
 }
 EXPORT_SYMBOL(blk_mq_tagset_busy_iter);

As I mentioned earlier, wouldn't this iterate over all tags for all
hctx's, when we just want the tags for hctx[i]?

Thanks,
John

[Not trimming reply for future reference]
Since blk_mq_tagset_busy_iter() loops over all hardware queues anyway,
all that changes is the order in which requests are examined. I am not
aware of any block driver that calls blk_mq_tagset_busy_iter() and
depends on the order of the requests passed to the callback function.
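For illustration, a typical caller looks something like this (a minimal
sketch; count_busy() and count_busy_requests() are made-up names, only
the busy_tag_iter_fn signature is as in block/blk-mq-tag.c):

/* Count in-flight requests across the whole tag set.  Each busy request
 * is handled independently, so the order in which requests are visited
 * does not matter. */
static bool count_busy(struct request *rq, void *data, bool reserved)
{
	unsigned int *busy = data;

	(*busy)++;
	return true;		/* keep iterating */
}

static unsigned int count_busy_requests(struct blk_mq_tag_set *set)
{
	unsigned int busy = 0;

	blk_mq_tagset_busy_iter(set, count_busy, &busy);
	return busy;
}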
OK, fine.
So this approach also seems viable to me.
I am, however, not so happy with how we use blk_mq_tag_set.tags[0] for
the shared tags; I would like to add a blk_mq_tag_set.shared_tags member
and make blk_mq_tag_set.tags[] point at blk_mq_tag_set.shared_tags, or
maybe not use blk_mq_tag_set.tags[] at all. However, that change may be
more intrusive.
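A rough sketch of what I have in mind (blk_mq_tag_set.shared_tags does
not exist today, and blk_mq_alloc_shared_tags() below is a hypothetical
helper; blk_mq_alloc_rq_map() is the existing allocator in
block/blk-mq.c, and request allocation is ignored here):

/* Allocate one set of tags and let every per-hctx slot alias it,
 * instead of giving tags[0] a special meaning. */
static int blk_mq_alloc_shared_tags(struct blk_mq_tag_set *set)
{
	int i;

	set->shared_tags = blk_mq_alloc_rq_map(set, 0, set->queue_depth,
					       set->reserved_tags);
	if (!set->shared_tags)
		return -ENOMEM;

	for (i = 0; i < set->nr_hw_queues; i++)
		set->tags[i] = set->shared_tags;

	return 0;
}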
And another, more concrete concern is that we are missing a check
somewhere for rq->mq_hctx == hctx when examining the bits of the shared
tags.
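Roughly along these lines in the per-hctx iterator (a sketch modelled on
bt_iter() in block/blk-mq-tag.c; the added rq->mq_hctx condition is the
suggestion, the surrounding code follows the existing function):

static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
{
	struct bt_iter_data *iter_data = data;
	struct blk_mq_hw_ctx *hctx = iter_data->hctx;
	struct blk_mq_tags *tags = hctx->tags;
	bool reserved = iter_data->reserved;
	struct request *rq;

	if (!reserved)
		bitnr += tags->nr_reserved_tags;
	rq = tags->rqs[bitnr];

	/* With shared tags, a set bit may belong to a request owned by a
	 * different hardware queue; skip those for per-hctx iteration. */
	if (rq && rq->q == hctx->queue && rq->mq_hctx == hctx)
		return iter_data->fn(hctx, rq, iter_data->data, reserved);
	return true;
}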
Another issue I notice is that using the tags from hctx0 only may cause
breakage when a request is associated with a different hw queue in the
driver. Specifically, we may have blk_mq_alloc_rqs(hctx_idx =
0)->blk_mq_init_request()->nvme_init_request(), which would set
iod->nvmeq = nvmeq0 for every request; since we may actually use this
iod on another hw queue, we're breaking that interface.
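For reference, nvme_init_request() in drivers/nvme/host/pci.c currently
looks roughly like this (paraphrased), so with a shared set every iod
would be pinned to the queue derived from hctx_idx 0:

static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
			     unsigned int hctx_idx, unsigned int numa_node)
{
	struct nvme_dev *dev = set->driver_data;
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

	/* For the I/O tag set, hw queue i maps to dev->queues[i + 1];
	 * with hctx_idx always 0 this pins every iod to queue 1. */
	int queue_idx = (set == &dev->tagset) ? hctx_idx + 1 : 0;

	iod->nvmeq = &dev->queues[queue_idx];
	nvme_req(req)->ctrl = &dev->ctrl;
	return 0;
}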
Thanks,
John