Re: [PATCH v3 05/17] scsi_transport_fc: Added a new rport state FC_PORTSTATE_MARGINAL

> On Oct 19, 2020, at 11:10 AM, Michael Christie <michael.christie@xxxxxxxxxx> wrote:
> 
> So it's not clear to me: if you know the path is not optimal and might hit
> a timeout, and you are not going to use it once the existing IO completes, why
> even try to send it? I mean, in this setup, new commands entering the
> dm-multipath layer will not be sent to these marginal paths, right?


Oh yeah, to be clear, I meant: why try to send it on the marginal path when you are
setting up the path groups so that they are not used and only the optimal paths are used?
When the driver/SCSI layer fails the IO, the multipath layer will make sure it
goes on an optimal path, right? So you do not have to worry about hitting a cmd timeout
and firing off the SCSI EH.
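
Just to make sure we are talking about the same mechanism, here is a minimal
sketch (my reading of the series, not the exact patch code) of how new I/O on a
marginal rport could be failed fast so dm-multipath retries it on another path.
The function name is made up for illustration; FC_PORTSTATE_MARGINAL and the
DID_TRANSPORT_MARGINAL host byte are what this series adds:

    /*
     * Hypothetical sketch, not the exact patch code: fail new I/O on a
     * marginal rport with a host byte that dm-multipath treats as a
     * retryable path error, instead of risking a cmd timeout and the
     * SCSI EH on the marginal link.
     */
    static inline int fc_rport_chkready_sketch(struct fc_rport *rport)
    {
            switch (rport->port_state) {
            case FC_PORTSTATE_ONLINE:
                    return 0;                            /* healthy path, send the cmd */
            case FC_PORTSTATE_MARGINAL:
                    return DID_TRANSPORT_MARGINAL << 16; /* fail fast, retry elsewhere */
            default:
                    return DID_NO_CONNECT << 16;         /* dead or unknown path */
            }
    }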

One other question I had, though: are you setting up multipathd so the
marginal paths are used if the optimal ones were to fail (like the optimal paths hit a
link down, or dev_loss_tmo or fast_io_fail fires, etc.), or will they be treated
like failed paths?

So could you end up with 3 groups:

1. Active optimal paths
2. Marginal paths
3. Failed paths

If the paths in group 1 move to group 3, does multipathd then handle it like an
all-paths-down condition, or does it switch to group 2?
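
For reference, something like this hypothetical multipath.conf fragment is the
kind of setup I am asking about. The marginal_path_* options are existing
multipath-tools settings for delayed reintegration of flaky paths; the
marginal_pathgroups option (which would give you the separate group 2 above)
only exists in recent multipath-tools, so treat the exact values here as
illustrative:

    defaults {
            # Mark a path marginal if it fails twice within 60 seconds.
            marginal_path_double_failed_time 60
            # Watch the path's error rate for 120s before reinstating it.
            marginal_path_err_sample_time 120
            # More than 10 errors per 1000 I/Os keeps the path marginal.
            marginal_path_err_rate_threshold 10
            # Re-test a marginal path every 300 seconds.
            marginal_path_err_recheck_gap_time 300
            # If supported, keep marginal paths in their own pathgroup
            # that is used only when all non-marginal paths are gone,
            # instead of treating them like failed paths.
            marginal_pathgroups yes
    }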




