> > > Another update is that V4 of 'scsi: core: only re-run queue in
> > > scsi_end_request() if device queue is busy' is quite hard to
> > > implement since commit b4fd63f42647110c9 ("Revert "scsi: core: run
> > > queue if SCSI device queue isn't ready and queue is idle"").
> >
> > Ming -
> >
> > Update from my testing. I found only one case of IO stall. I can
> > discuss this specific topic if you would like to send a separate
> > patch. There is too much interleaved discussion in this thread.
> >
> > I noted you mentioned that V4 of 'scsi: core: only re-run queue in
> > scsi_end_request() if device queue is busy' needs the underlying
> > support of the "scsi: core: run queue if SCSI device queue isn't
> > ready and queue is idle" patch, which is already reverted in
> > mainline.
>
> Right.
>
> > The overall idea of running h/w queues conditionally in your patch
> > "scsi: core: only re-run queue in scsi_end_request" is still
> > worthwhile. There can be
>
> I agree.
>
> > some race if we use this patch, and that is your concern. Am I
> > correct?
>
> If the patch of "scsi: core: run queue if SCSI device queue isn't
> ready and queue is idle" is re-added, the approach should work.

I could not find an issue in "scsi: core: only re-run queue in
scsi_end_request" even though the above-mentioned patch is reverted.
There may be some corner cases/race conditions in the submission path
which can be fixed by doing a self-restart of the h/w queue.

> However, it looks a bit complicated, and I was thinking if one simpler
> approach can be figured out.

I was thinking your original approach is simple, but if you have some
other simple approach in mind, I can test it as part of this series.
BTW, I am still not getting why you think your original approach is not
a good design.
> > One of the races I found in my testing is fixed by the below patch -
> >
> > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > index 54f9015..bcfd33a 100644
> > --- a/block/blk-mq-sched.c
> > +++ b/block/blk-mq-sched.c
> > @@ -173,8 +173,10 @@ static int blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
> >  		if (!sbitmap_any_bit_set(&hctx->ctx_map))
> >  			break;
> >
> > -		if (!blk_mq_get_dispatch_budget(hctx))
> > +		if (!blk_mq_get_dispatch_budget(hctx)) {
> > +			blk_mq_delay_run_hw_queue(hctx, BLK_MQ_BUDGET_DELAY);
> >  			break;
> > +		}
>
> Actually all hw queues need to be run, instead of this hctx, because
> the budget stuff is request queue wide.

OK. But I thought each hctx would see the issue independently; if they
are active, they will restart their own hctx queue.

BTW, do you think the above handling in the block layer code makes
sense irrespective of the current h/w queue restart logic, or is it
only relevant in combination with it?

> >
> >  		rq = blk_mq_dequeue_from_ctx(hctx, ctx);
> >  		if (!rq) {
> >
> >
> > In my test setup, I have your V3 'scsi: core: only re-run queue in
> > scsi_end_request() if device queue is busy' rebased on 5.8, which
> > does not have "scsi: core: run queue if SCSI device queue isn't
> > ready and queue is idle" since it is already reverted in mainline.
>
> If you added the above patch, I believe you can remove the run queue
> in scsi_end_request() unconditionally. However, the delay run queue
> may degrade io performance.

Understood. But that performance issue is due to budget contention and
may impact some old HBAs (with less queue depth) or emulation HBAs.
That is why I thought your patch of conditionally running the h/w
queue from the completion path would improve performance.

> Actually the re-run queue in scsi_end_request() is only for dequeuing
> requests from the sw/scheduler queue. And there is no such issue if
> the request stays in the hctx->dispatch list.

I was not aware of this. Thanks for the info. I will review the code
for my own understanding.
> > I have 24 SAS SSDs and I reduced QD to 16 so that I hit budget
> > contention frequently. I am running with ioscheduler=none.
> > If hctx0 has 16 IOs in flight (all those IOs will be posted to the
> > h/w queue directly), the next IO (the 17th) will see budget
> > contention and it will be queued into the s/w queue.
> > It is expected that the queue will be kicked from scsi_end_request().
> > It is possible that one of the IOs completed and reduced
> > sdev->device_busy, but has not yet reached the code which kicks the
> > h/w queue. Releasing the budget and restarting the h/w queue is not
> > atomic. At the same time, another IO (the 18th) from the submission
> > path gets the budget and will be posted from the below path. This IO
> > will reset sdev->restart and it will not allow the h/w queue to be
> > restarted from the completion path. This will lead to one IO stall.
>
> Maybe a re-run of the queue is needed before resetting sdev->restart
> if sdev->restart is 1.

Agree.

> Thanks,
> Ming