On 12/1/2018 10:32 AM, Bart Van Assche wrote:
> On 12/1/18 9:11 AM, Hannes Reinecke wrote:
>> Yes, I'm very much in favour of this, too.
>> We always have this IMO slightly weird notion of stopping the queue,
>> setting some error flags in the driver, then _restarting_ the queue,
>> just so that the driver then sees the error flag and terminates the
>> requests. Which I always found quite counter-intuitive.
>> So having a common helper for terminating requests on queue errors
>> would be very welcome here.
>> But when we have that we really should audit all drivers to ensure
>> they do the right thing (tm).
> Would calling blk_abort_request() for all outstanding requests be
> sufficient to avoid stopping and restarting the queue in the
> nvme-fc driver?
What nvme-fc does is the same as what is done in all the other
transports, for the same reasons. If we're eliminating those
synchronization reasons, and now that we've plugged the request_queue
path into the transports to check state appropriately, I don't think
there are reasons to block the queue. In some respects it is nice to
stop new io while the work to terminate everything else happens, but I
don't know that it's required. I would hope that the bounced work due
to the controller state (returned BLK_STS_RESOURCE) is actually pausing
for a short while. I've seen some circumstances where it didn't and was
infinitely polling, which would be a change in behavior vs the queue stops.
-- james