On 7/27/20 12:36 PM, Sagi Grimberg wrote:
>
>>>>> +void blk_mq_quiesce_queue_async(struct request_queue *q)
>>>>> +{
>>>>> +	struct blk_mq_hw_ctx *hctx;
>>>>> +	unsigned int i;
>>>>> +
>>>>> +	blk_mq_quiesce_queue_nowait(q);
>>>>> +
>>>>> +	queue_for_each_hw_ctx(q, hctx, i) {
>>>>> +		init_completion(&hctx->rcu_sync.completion);
>>>>> +		init_rcu_head(&hctx->rcu_sync.head);
>>>>> +		if (hctx->flags & BLK_MQ_F_BLOCKING)
>>>>> +			call_srcu(hctx->srcu, &hctx->rcu_sync.head,
>>>>> +				wakeme_after_rcu);
>>>>> +		else
>>>>> +			call_rcu(&hctx->rcu_sync.head,
>>>>> +				wakeme_after_rcu);
>>>>> +	}
>>>>
>>>> It looks unnecessary to do anything in the !BLK_MQ_F_BLOCKING case,
>>>> and a single synchronize_rcu() is enough for all hctxs during the wait.
>>>
>>> That's true, but I want a single interface for both. v2 had exactly
>>> that, but I decided that this approach is better.
>>
>> Not sure a new interface is needed; one simple way is to:
>>
>> 1) call blk_mq_quiesce_queue_nowait() for each request queue
>>
>> 2) wait in a driver-specific way
>>
>> Or, just wondering, why doesn't nvme use set->tag_list to retrieve the
>> namespaces? Then you could add per-tagset APIs for the waiting.
>
> Because it puts assumptions on how quiesce works, which is something
> I'd like to avoid because I think it's cleaner. What do others think?
> Jens? Christoph?

I'd prefer to have it in a helper, and just have blk_mq_quiesce_queue()
call that.

-- 
Jens Axboe
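
For illustration, here is a minimal sketch of the split Jens suggests,
folding in Ming's observation that a single synchronize_rcu() covers every
non-blocking hctx. The rcu_sync field and wakeme_after_rcu() callback come
from the quoted patch; blk_mq_quiesce_queue_async_wait() is a hypothetical
counterpart that does not appear in the thread:

/*
 * Sketch only: the async side queues one grace-period callback per
 * BLK_MQ_F_BLOCKING hctx. Non-blocking hctxs need nothing up front,
 * since one synchronize_rcu() on the wait side covers them all.
 */
void blk_mq_quiesce_queue_async(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	unsigned int i;

	blk_mq_quiesce_queue_nowait(q);

	queue_for_each_hw_ctx(q, hctx, i) {
		if (!(hctx->flags & BLK_MQ_F_BLOCKING))
			continue;
		init_completion(&hctx->rcu_sync.completion);
		init_rcu_head(&hctx->rcu_sync.head);
		call_srcu(hctx->srcu, &hctx->rcu_sync.head,
			  wakeme_after_rcu);
	}
}

/* Hypothetical wait-side counterpart. */
void blk_mq_quiesce_queue_async_wait(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	unsigned int i;
	bool rcu = false;

	queue_for_each_hw_ctx(q, hctx, i) {
		if (hctx->flags & BLK_MQ_F_BLOCKING)
			wait_for_completion(&hctx->rcu_sync.completion);
		else
			rcu = true;
	}
	/* One grace period suffices for every non-blocking hctx. */
	if (rcu)
		synchronize_rcu();
}

/* The existing entry point then reduces to the pair of helpers. */
void blk_mq_quiesce_queue(struct request_queue *q)
{
	blk_mq_quiesce_queue_async(q);
	blk_mq_quiesce_queue_async_wait(q);
}

This keeps existing blk_mq_quiesce_queue() callers unchanged while letting
a driver run the start and wait phases separately when it has many queues.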
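
And a sketch of the caller side that motivates the split, in the style of
nvme's nvme_stop_queues() (the namespace list and rwsem match the upstream
function of that era; the two-pass structure is the assumption here):

void nvme_stop_queues(struct nvme_ctrl *ctrl)
{
	struct nvme_ns *ns;

	down_read(&ctrl->namespaces_rwsem);
	/* Pass 1: kick off quiesce on every namespace queue. */
	list_for_each_entry(ns, &ctrl->namespaces, list)
		blk_mq_quiesce_queue_async(ns->queue);
	/* Pass 2: wait once, so the grace periods elapse in parallel. */
	list_for_each_entry(ns, &ctrl->namespaces, list)
		blk_mq_quiesce_queue_async_wait(ns->queue);
	up_read(&ctrl->namespaces_rwsem);
}

The point of the two passes is that the per-hctx grace periods overlap
instead of being serialized per namespace, which is what made a plain
blk_mq_quiesce_queue() loop slow on controllers with many namespaces.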