Re: REQ_HIPRI and SCSI mid-level

On Thu, May 27, 2021 at 05:43:07PM +0200, Hannes Reinecke wrote:
> On 5/25/21 6:03 PM, Douglas Gilbert wrote:
> > On 2021-05-21 5:56 p.m., Douglas Gilbert wrote:
> > > The REQ_HIPRI flag on requests is associated with blk_poll() (aka iopoll)
> > > and assumes that user space (or some higher layer) will call blk_poll()
> > > on requests marked with REQ_HIPRI, and that this polling will drive
> > > their completion.
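> > > 
> > > For reference, the submit-and-poll pattern looks roughly like this
> > > (lk 5.13 era APIs; the bio setup and the 'done' flag here are only
> > > illustrative):
> > > 
> > >     bio->bi_opf = REQ_OP_READ | REQ_HIPRI;  /* ask for polled completion */
> > >     cookie = submit_bio(bio);               /* blk_qc_t poll cookie */
> > > 
> > >     while (!READ_ONCE(done))                /* caller drives completion */
> > >         blk_poll(bdev_get_queue(bdev), cookie, true /* spin */);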
> > > 
> > > In lk 5.13-rc1 the megaraid and scsi_debug LLDs support blk_poll() [seen
> > > by searching for 'mq_poll'], with more to follow, I assume. I have tested
> > > blk_poll() on the scsi_debug driver using both fio and the new sg driver.
> > > It works well, with one caveat: only as long as there is no error.
> > > After fighting with that error processing from both the ULD side (i.e.
> > > the new sg driver) and the LLD side, I have concluded that the glue that
> > > holds them together, that is, the mid-level, is not as REQ_HIPRI-aware
> > > as it should be.
> > > 
> > > Yes, REQ_HIPRI is there in scsi_lib.c, but it is missing from
> > > scsi_error.c. How can scsi_error.c re-issue requests _without_ taking
> > > into account that the original was issued with REQ_HIPRI? I don't know,
> > > but I'm fairly sure that is close to the area causing the problems I
> > > see (mainly lockups).
> > > 
> > > As an example, the scsi_debug driver has an in-use bitmap in which the
> > > code looks for an empty slot when a new request arrives. Due to
> > > (incorrect) parameter setup, that search may fail. If the driver then
> > > returns:
> > >      device_qfull_result = (DID_OK << 16) | SAM_STAT_TASK_SET_FULL;
> > > then I see lockups if the request in question has REQ_HIPRI set.
> > > 
> > > If that is changed to:
> > >      device_qfull_result = (DID_ABORT << 16) | SAM_STAT_TASK_SET_FULL;
> > > then my user-space test program sees the error and aborts, showing the
> > > TASK SET FULL SCSI status. That is much better than a lockup ...
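> > > 
> > > (In scsi_debug terms, the failure path boils down to roughly the
> > > following, with the surrounding slot-search code abridged:
> > > 
> > >      cmnd->result = device_qfull_result;  /* qfull result chosen above */
> > >      cmnd->scsi_done(cmnd);               /* complete to the mid-level */
> > > 
> > > so the mid-level, not the LLD, decides what happens next.)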
> > > 
> That's because with the first result the command is requeued (due to
> DID_OK), whereas with the second result the command is aborted (due to
> DID_ABORT).
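> 
> Abridged, the relevant logic in scsi_decide_disposition() (scsi_error.c;
> exact case names vary between kernel versions) is:
> 
>     switch (host_byte(scmd->result)) {
>     case DID_OK:
>         break;                  /* go on and inspect the status byte */
>     case DID_ABORT:
>         return SUCCESS;         /* complete now; the error goes upward */
>     }
>     switch (status_byte(scmd->result)) {
>     case QUEUE_FULL:            /* i.e. TASK SET FULL */
>         scsi_handle_queue_full(scmd->device);
>         fallthrough;
>     case BUSY:
>         return ADD_TO_MLQUEUE;  /* requeue; never completed to the poller */
>     }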
> 
> So the question really is whether we should retry commands which have
> REQ_HIPRI set, or whether we should rather complete them with an
> appropriate error code.
> A bit like enhanced BLOCK_PC requests, if you will.
> 
> > > Having played around with variants of the above for a few weeks, I'd
> > > like to throw this problem into the open :-)
> > > 
> > > 
> > > Suggestion: perhaps the eh could give up immediately on any request
> > > with REQ_HIPRI set (i.e. make it a higher layer's problem).
> 
> As I said above: it's not only the SCSI EH which would need to be
> modified, but possibly also the result evaluation in
> scsi_decide_disposition(); it's questionable whether a HIPRI command
> should be requeued at all.
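> 
> Something like this (purely hypothetical, not in any tree) in
> scsi_decide_disposition():
> 
>     /* never requeue a polled command; complete it with the error
>      * and let the submitter deal with it */
>     if (scmd->request->cmd_flags & REQ_HIPRI)
>         return SUCCESS;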

Why can't a HIPRI request be requeued?

> 
> But this might even affect the NVMe folks; they do return commands with
> BLK_STS_RESOURCE, too.

The block layer is responsible for re-queueing BLK_STS_RESOURCE requests,
so I still do not understand why this is an issue for HIPRI requests.
Also, rq->mq_hctx does not change after its allocation, so blk_poll()
will keep polling the correct hw queue to reap the IO.
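
For reference, in lk 5.13 blk_poll() resolves the hw queue from the
cookie roughly like this (block/blk-mq.c, abridged):

    struct blk_mq_hw_ctx *hctx =
            q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];

    ret = q->mq_ops->poll(hctx);    /* e.g. scsi_debug's mq_poll */

so even a requeued request is reaped on the same hw queue.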


Thanks, 
Ming



