Re: [PATCH 04/12] IB/srp: Fix connection state tracking

On Tue, 2015-05-05 at 16:26 +0200, Bart Van Assche wrote:
> On 05/05/15 16:10, Doug Ledford wrote:
> > However, while looking through the driver to research this, I noticed
> > something else that seems more important if you ask me.  With this patch
> > we now implement individual channel connection tracking.  However, in
> > srp_queuecommand() you pick the channel based on the tag, and the blk
> > layer has no idea of these disconnects, so the blk layer is free to
> > assign a tag/channel to a channel that's disconnected, and then as best
> > I can tell, you will simply try to post a work request to a channel
> > that's already disconnected, which I would expect to fail if we have
> > already disconnected this particular qp and not brought up a new one
> > yet.  So it seems to me there is a race condition between new incoming
> > SCSI commands and this disconnect/reconnect window, and that maybe we
> > should be sending these commands back to the mid layer for requeueing
> > when the channel the blk_mq tag points to is disconnected.  Or am I
> > missing something in there?
> 
> Hello Doug,
> 
> Around the time a cable disconnect or other link layer failure is 
> detected by the SRP initiator or any other SCSI LLD it is unavoidable 
> that one or more SCSI requests fail. It is up to a higher layer (e.g. 
> dm-multipath + multipathd) to decide what to do with such requests, e.g. 
> queue these requests and resend these over another path.

Sure, but that wasn't my point.  My point was that if you know the
channel is disconnected, then why don't you go immediately to the
correct action in queuecommand (where the correct action could be
requeueing until reconnect or returning an error, whichever is
appropriate)?  Instead you attempt to post a command to a
known-disconnected queue pair.
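
To make that concrete, here is a toy sketch of the early exit I have in
mind.  This is not the actual srp driver code; the type, field, and
return-code names are stand-ins for illustration only:

```c
#include <assert.h>

/* Hypothetical per-channel state, modeled on the patch's
 * individual channel connection tracking.  Illustrative names,
 * not the real srp.c identifiers. */
enum chan_state { CH_CONNECTED, CH_DISCONNECTED };

struct srp_channel_model {
	enum chan_state state;
};

/* Stand-ins for the SCSI midlayer convention:
 * 0 = command queued, HOST_BUSY = midlayer should requeue
 * (in the real driver, SCSI_MLQUEUE_HOST_BUSY). */
#define QUEUE_OK  0
#define HOST_BUSY 0x1055

static int model_queuecommand(struct srp_channel_model *ch)
{
	/* Early exit: if the channel this tag maps to is known to be
	 * disconnected, hand the command back to the midlayer for
	 * requeueing instead of posting a send WR to a dead QP. */
	if (ch->state != CH_CONNECTED)
		return HOST_BUSY;

	/* ... otherwise build the IU and post the send work request ... */
	return QUEUE_OK;
}
```

The point being that the disconnect is already known at this point, so
the failure can be handled deliberately rather than discovered when the
post to the QP fails.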

>  The SRP 
> initiator driver has been tested thoroughly with the multipath 
> queue_if_no_path policy, with a fio job with I/O verification enabled 
> running on top of a dm device while concurrently repeatedly simulating 
> link layer failures (via ibportstate).

Part of the reason I'm asking is that I don't know how blk_mq
handles certain conditions.  However, your testing above only covers
one case: all channels get dropped.  As unlikely as it may be, what if
resource constraints caused just one channel out of the bunch to be
dropped while the others stayed alive?  Then blk_mq would see requests
on just one queue come back errored, while the others completed
successfully.  Does it drop that one queue out of rotation, or does it
fail over the entire connection?
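
A toy model of the scenario I mean (again, illustrative names only, not
the driver's): blk-mq spreads tags across hardware queues, each hw queue
maps to one SRP channel, and exactly one channel drops.  Only the tags
that map to that channel fail, while everything else completes:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CHANNELS 4

/* Channel 2 dropped, the other three stayed up. */
static bool channel_up[NR_CHANNELS] = { true, true, false, true };

/* Simplified stand-in for the tag-to-channel selection: in the real
 * driver part of the tag encodes the channel index; here a plain
 * modulo is enough to show the effect. */
static int tag_to_channel(unsigned int tag)
{
	return tag % NR_CHANNELS;
}

static bool request_would_fail(unsigned int tag)
{
	/* A request fails iff its tag happens to map to the one
	 * disconnected channel. */
	return !channel_up[tag_to_channel(tag)];
}
```

So the midlayer/blk_mq sees a steady trickle of errors on one queue and
clean completions on the rest, and my question is what policy applies in
that asymmetric case.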

-- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: 0E572FDD
