> > Ming -
> >
> > I noted your comments.
> >
> > I have completed testing, and this latest performance issue on Volume
> > is still outstanding.
> > Currently there is a 20-25% performance drop in IOPS, and we want that
> > closed before shared host tag support is enabled for the <megaraid_sas>
> > driver.
> > Just for my understanding - what will be the next steps on this?
> >
> > I can validate any new approach/patch for this issue.
>
> Hello,
>
> What do you think of the following patch?

I tested this patch. I still see IO hang.

> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index c866a4f33871..49f0fc5c7a63 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -552,8 +552,24 @@ static void scsi_run_queue_async(struct scsi_device *sdev)
>  	if (scsi_target(sdev)->single_lun ||
>  	    !list_empty(&sdev->host->starved_list))
>  		kblockd_schedule_work(&sdev->requeue_work);
> -	else
> -		blk_mq_run_hw_queues(sdev->request_queue, true);
> +	else {
> +		/*
> +		 * smp_mb() implied in either rq->end_io or blk_mq_free_request
> +		 * is for ordering writing .device_busy in scsi_device_unbusy()
> +		 * and reading sdev->restarts.
> +		 */
> +		int old = atomic_read(&sdev->restarts);
> +
> +		if (old) {
> +			blk_mq_run_hw_queues(sdev->request_queue, true);
> +
> +			/*
> +			 * ->restarts has to be kept as non-zero if new
> +			 * budget contention comes.
> +			 */
> +			atomic_cmpxchg(&sdev->restarts, old, 0);
> +		}
> +	}
>  }
>
>  /* Returns false when no more bytes to process, true if there are more */
> @@ -1612,8 +1628,34 @@ static void scsi_mq_put_budget(struct request_queue *q)
>  static bool scsi_mq_get_budget(struct request_queue *q)
>  {
>  	struct scsi_device *sdev = q->queuedata;
> +	int ret = scsi_dev_queue_ready(q, sdev);
>
> -	return scsi_dev_queue_ready(q, sdev);
> +	if (ret)
> +		return true;
> +
> +	/*
> +	 * If all in-flight requests originated from this LUN are completed
> +	 * before setting .restarts, sdev->device_busy will be observed as
> +	 * zero, then blk_mq_delay_run_hw_queue() will dispatch this request
> +	 * soon. Otherwise, completion of one of these requests will observe
> +	 * the .restarts flag, and the request queue will be run for handling
> +	 * this request, see scsi_end_request().
> +	 */
> +	atomic_inc(&sdev->restarts);
> +
> +	/*
> +	 * Order writing .restarts and reading .device_busy, and make sure
> +	 * .restarts is visible to scsi_end_request(). Its pair is implied by
> +	 * __blk_mq_end_request() in scsi_end_request() for ordering
> +	 * writing .device_busy in scsi_device_unbusy() and reading .restarts.
> +	 */
> +	smp_mb__after_atomic();
> +
> +	if (unlikely(atomic_read(&sdev->device_busy) == 0 &&
> +		     !scsi_device_blocked(sdev)))
> +		blk_mq_delay_run_hw_queues(sdev->request_queue,
> +					   SCSI_QUEUE_DELAY);

Hi Ming -

There is still a race which is not handled. Take the case of an IO that
cannot get a budget and has already marked the <restarts> flag.
The <restarts> flag will be seen as non-zero in the completion path, and the
completion path will attempt a h/w queue run (but this particular IO is
still not in the s/w queue). The h/w queue run attempted from the completion
path will not flush any IO, since there is no IO in the s/w queue.

I think the code above was added assuming it would manage this particular
case, but it does not help either: if some IO was submitted directly to the
h/w queue in between, sdev->device_busy will be non-zero, so the
device_busy == 0 check never fires.

If I move the above section of the code into the completion path, the IO
hang is resolved.
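To make the window concrete, below is a small userspace model of the
interleaving described above. This is a hypothetical sketch only:
device_busy, restarts, sw_queue_len and run_hw_queue() are stand-ins for the
blk-mq machinery, not kernel code, and the barriers force the completion-side
queue run to land after <restarts> is set but before the request reaches the
s/w queue - exactly the stranded case.

/* restart_race.c - model of the stranded-request window (sketch only).
 * Build: gcc -pthread restart_race.c -o restart_race
 */
#define _POSIX_C_SOURCE 200809L
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

static atomic_int device_busy = 1;   /* one request still in flight */
static atomic_int restarts = 0;
static atomic_int sw_queue_len = 0;  /* requests a hw-queue run can see */
static atomic_int dispatched = 0;

static pthread_barrier_t step1, step2;

/* stand-in for blk_mq_run_hw_queues(): dispatches only queued requests */
static void run_hw_queue(void)
{
	atomic_fetch_add(&dispatched, atomic_exchange(&sw_queue_len, 0));
}

static void *submitter(void *arg)
{
	(void)arg;
	/* scsi_mq_get_budget() fails: mark the restarts flag... */
	atomic_fetch_add(&restarts, 1);
	pthread_barrier_wait(&step1);  /* completion path runs in between */
	pthread_barrier_wait(&step2);
	/* ...and only now does the request land back on the s/w queue -
	 * after the completion-side queue run already happened. */
	atomic_fetch_add(&sw_queue_len, 1);
	return NULL;
}

static void *completer(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&step1);
	/* scsi_end_request(): the last in-flight request completes */
	atomic_fetch_sub(&device_busy, 1);
	if (atomic_load(&restarts)) {
		run_hw_queue();               /* s/w queue empty: no-op */
		atomic_store(&restarts, 0);   /* flag cleared anyway */
	}
	pthread_barrier_wait(&step2);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_barrier_init(&step1, NULL, 2);
	pthread_barrier_init(&step2, NULL, 2);
	pthread_create(&a, NULL, submitter, NULL);
	pthread_create(&b, NULL, completer, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* prints dispatched=0 queued=1: nothing will run the queue again */
	printf("dispatched=%d queued=%d -> request stranded\n",
	       atomic_load(&dispatched), atomic_load(&sw_queue_len));
	return 0;
}

The queue run was a no-op and <restarts> was already cleared by the cmpxchg,
so nothing ever dispatches the stranded request - the hang I observe.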
I also verified performance -

Multi Drive R0, 1 worker per VD: 662K IOPS prior to this patch, scaling to
1.1M IOPS with it (90% improvement).
Multi Drive R0, 4 workers per VD: 1.9M IOPS prior to this patch, scaling to
3.1M IOPS with it (50% improvement).

Here is the modified patch -

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 6f50e5c..dcdc5f6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -594,8 +594,26 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
 	if (scsi_target(sdev)->single_lun ||
 	    !list_empty(&sdev->host->starved_list))
 		kblockd_schedule_work(&sdev->requeue_work);
-	else
-		blk_mq_run_hw_queues(q, true);
+	else {
+		/*
+		 * smp_mb() implied in either rq->end_io or blk_mq_free_request
+		 * is for ordering writing .device_busy in scsi_device_unbusy()
+		 * and reading sdev->restarts.
+		 */
+		int old = atomic_read(&sdev->restarts);
+
+		if (old) {
+			blk_mq_run_hw_queues(sdev->request_queue, true);
+
+			/*
+			 * ->restarts has to be kept as non-zero if new
+			 * budget contention comes.
+			 */
+			atomic_cmpxchg(&sdev->restarts, old, 0);
+		} else if (unlikely(atomic_read(&sdev->device_busy) == 0 &&
+			   !scsi_device_blocked(sdev)))
+			blk_mq_delay_run_hw_queues(sdev->request_queue,
+						   SCSI_QUEUE_DELAY);
+	}

 	percpu_ref_put(&q->q_usage_counter);
 	return false;
@@ -1615,8 +1633,31 @@ static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
+	int ret = scsi_dev_queue_ready(q, sdev);
+
+	if (ret)
+		return true;

-	return scsi_dev_queue_ready(q, sdev);
+	/*
+	 * If all in-flight requests originated from this LUN are completed
+	 * before setting .restarts, sdev->device_busy will be observed as
+	 * zero, then blk_mq_delay_run_hw_queue() will dispatch this request
+	 * soon. Otherwise, completion of one of these requests will observe
+	 * the .restarts flag, and the request queue will be run for handling
+	 * this request, see scsi_end_request().
+	 */
+	atomic_inc(&sdev->restarts);
+
+	/*
+	 * Order writing .restarts and reading .device_busy, and make sure
+	 * .restarts is visible to scsi_end_request(). Its pair is implied by
+	 * __blk_mq_end_request() in scsi_end_request() for ordering
+	 * writing .device_busy in scsi_device_unbusy() and reading .restarts.
+	 */
+	smp_mb__after_atomic();
+
+	return false;
 }

 static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index bc59090..ac45058 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -108,7 +108,8 @@ struct scsi_device {
 	atomic_t device_busy;		/* commands actually active on LLDD */
 	atomic_t device_blocked;	/* Device returned QUEUE_FULL. */
-
+
+	atomic_t restarts;
 	spinlock_t list_lock;
 	struct list_head starved_entry;
 	unsigned short queue_depth;	/* How deep of a queue we want */
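As an aside on the barrier pairing the comments describe: it is the classic
store-buffering pattern, so with smp_mb__after_atomic() on the submission
side and the smp_mb() implied on the completion side, at least one of the
two sides must observe the other's write - either the submitter sees
device_busy == 0, or the completer sees restarts != 0, and the queue gets
re-run either way. Below is a small userspace litmus test of that guarantee;
it is a hypothetical model only, with C11 seq_cst fences standing in for the
kernel barriers.

/* sb_litmus.c - store-buffering litmus test for the .restarts/.device_busy
 * barrier pairing (userspace model, not kernel code).
 * Build: gcc -pthread sb_litmus.c -o sb_litmus
 */
#define _POSIX_C_SOURCE 200809L
#include <stdatomic.h>
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int device_busy;
static atomic_int restarts;
static atomic_int submitter_saw_idle;
static atomic_int completer_saw_restarts;

static void *submit_side(void *arg)
{
	(void)arg;
	/* scsi_mq_get_budget() failure path */
	atomic_fetch_add_explicit(&restarts, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb__after_atomic() */
	if (atomic_load_explicit(&device_busy, memory_order_relaxed) == 0)
		atomic_store(&submitter_saw_idle, 1); /* would delay-run queue */
	return NULL;
}

static void *complete_side(void *arg)
{
	(void)arg;
	/* scsi_device_unbusy() + scsi_end_request() */
	atomic_fetch_sub_explicit(&device_busy, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb() in end_io path */
	if (atomic_load_explicit(&restarts, memory_order_relaxed))
		atomic_store(&completer_saw_restarts, 1); /* would run queues */
	return NULL;
}

int main(void)
{
	for (int i = 0; i < 100000; i++) {
		pthread_t a, b;

		atomic_store(&device_busy, 1);
		atomic_store(&restarts, 0);
		atomic_store(&submitter_saw_idle, 0);
		atomic_store(&completer_saw_restarts, 0);
		pthread_create(&a, NULL, submit_side, NULL);
		pthread_create(&b, NULL, complete_side, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		/* forbidden outcome: both sides read the stale value */
		assert(atomic_load(&submitter_saw_idle) ||
		       atomic_load(&completer_saw_restarts));
	}
	printf("no lost wakeup in 100000 interleavings\n");
	return 0;
}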