On 2021-02-09 09:27, Daejun Park wrote:
@@ -342,13 +1208,14 @@ void ufshpb_suspend(struct ufs_hba *hba)
> struct scsi_device *sdev;
>
> shost_for_each_device(sdev, hba->host) {
> - hpb = sdev->hostdata;
> + hpb = ufshpb_get_hpb_data(sdev);
> if (!hpb)
> continue;
>
> if (ufshpb_get_state(hpb) != HPB_PRESENT)
> continue;
> ufshpb_set_state(hpb, HPB_SUSPEND);
> + ufshpb_cancel_jobs(hpb);
There may be a deadlock problem here: in the case of runtime suspend,
when ufshpb_suspend() is invoked, all of hba's child SCSI devices are
in the RPM_SUSPENDED state. When this line tries to cancel a running
map work, i.e. when ufshpb_get_map_req() calls the lines below, it
will get stuck at blk_queue_enter().

	req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
			      REQ_OP_SCSI_IN, 0);

Please check block layer power management, and see also commit
d55d15a33 ("scsi: block: Do not accept any requests while suspended").
I agree with your comment.
How about adding the BLK_MQ_REQ_NOWAIT flag to blk_get_request() to
avoid the hang?
That won't work - BLK_MQ_REQ_NOWAIT allows one to fast fail from
blk_mq_get_tag(), but blk_queue_enter() comes before
__blk_mq_alloc_request().
In blk_queue_enter(), the BLK_MQ_REQ_NOWAIT flag makes it return an
error instead of waiting for RPM resume. Please refer to the following
code.
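The relevant behavior in blk_queue_enter() (block/blk-core.c, after the
block-layer PM rework referenced above) is roughly the following -
paraphrased, not a verbatim quote:

	/*
	 * If entering the queue fails because it is pm_only (runtime
	 * suspended), a NOWAIT caller gets -EBUSY back immediately ...
	 */
	if (flags & BLK_MQ_REQ_NOWAIT)
		return -EBUSY;

	/*
	 * ... while a normal caller sleeps on q->mq_freeze_wq until the
	 * queue is unfrozen / runtime resumed, which is where the map
	 * work would block.
	 */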
Oops, sorry, my memory needs to be refreshed on that part.
But won't the BLK_MQ_REQ_NOWAIT flag break your original purpose? When
runtime suspend is out of the picture, if traffic is heavy on the
request queue, map_work() will bail out frequently whenever it cannot
get a request from the queue - that would pull down the efficiency of
each map_work() run and may hurt random performance...
I think deadlock prevention is the most important thing, so I want to
add the BLK_MQ_REQ_NOWAIT flag.
Starvation of a map request can be distinguished by the return value
of blk_get_request(): -EWOULDBLOCK means there are no available tags
for the request, while -EBUSY means blk_queue_enter() failed. To
overcome starvation of map requests, we can retry N times in the
heavy-traffic case (maybe N=3?), as in the sketch below.
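Something along these lines - a minimal sketch only, meant to live next
to ufshpb_get_map_req() in ufshpb.c; the helper name, the retry count,
and the sleep between attempts are placeholders of mine, while the flag
and error codes come from blk_get_request()/blk_queue_enter():

	#define MAP_REQ_RETRIES	3	/* assumption: N=3 as proposed */

	static struct request *ufshpb_try_get_map_req(struct ufshpb_lu *hpb)
	{
		struct request *req;
		int retries = MAP_REQ_RETRIES;

		do {
			req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
					      REQ_OP_SCSI_IN,
					      BLK_MQ_REQ_NOWAIT);
			if (!IS_ERR(req))
				return req;

			/*
			 * -EBUSY: blk_queue_enter() failed (e.g. the queue
			 * is runtime suspended) - give up so that
			 * ufshpb_cancel_jobs() is not blocked.
			 */
			if (PTR_ERR(req) != -EWOULDBLOCK)
				return NULL;

			/* -EWOULDBLOCK: no free tag (heavy traffic), retry */
			usleep_range(1000, 1100);
		} while (--retries);

		return NULL;
	}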
LGTM. You make the call.
Regards,
Can Guo.
Thanks,
Daejun