Re: [PATCH v3 05/17] scsi_transport_fc: Added a new rport state FC_PORTSTATE_MARGINAL

On 10/19/20 8:55 PM, Mike Christie wrote:
On 10/19/20 12:31 PM, Muneendra Kumar M wrote:
Hi Michael,




Oh yeah, to be clear I meant: why try to send it on the marginal path
when you are setting up the path groups so that the marginal paths are not
used and only the optimal paths are used?
When the driver/SCSI layer fails the I/O, the multipath layer will
make sure it goes out on an optimal path, right, so you do not have to worry
about hitting a command timeout and firing off the SCSI EH.

However, one other question I had: are you setting up
multipathd so the marginal paths are used if the optimal ones fail
(like the optimal paths hit a link down, dev_loss_tmo or
fast_io_fail fires, etc.), or will they be treated like failed paths?

So could you end up with 3 groups:

1. Active optimal paths
2. Marginal
3. Failed

If the paths in group 1 move to group 3, does multipathd handle it like an
all-paths-down event, or does multipathd switch to group 2?

Actually, marginal paths work similarly to the ALUA non-optimized state.
Yes, the system can send I/O to them, but it'd be preferable for the I/O to
be moved somewhere else.
If there is no other path (or no better path), yeah, tough.

Hence the answer would be 2).


[Muneendra] As Hannes mentioned, if there are no active paths, the marginal
paths will be moved to normal and the system will send the I/O.
What do you mean by normal?

- You don't mean that the FC remote port state will change to online, right?

- Do you just mean that the marginal path group will become the active group in the dm-multipath layer?

Actually, the latter is what I had in mind.

The paths should stay in 'marginal' until some admin interaction has taken place. That would be either a link reset (i.e. the fabric has been rescanned due to an RSCN event) or an admin resetting the state to 'normal' manually.
The daemons should never move the port out of 'marginal'.
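To make that lifecycle explicit, here is a minimal standalone C sketch (not the actual scsi_transport_fc code; the names and the transition-cause enum are assumptions for illustration only): only an admin write or a link reset may clear the marginal state, while a daemon may set it but never clear it.

/*
 * Standalone sketch of the lifecycle described above -- not the
 * scsi_transport_fc implementation.  All names here are hypothetical.
 */
#include <stdbool.h>

enum rport_state {
	RPORT_ONLINE,
	RPORT_MARGINAL,		/* counterpart of FC_PORTSTATE_MARGINAL */
	RPORT_BLOCKED,
};

enum transition_cause {
	CAUSE_DAEMON,		/* multipathd or another monitoring daemon */
	CAUSE_ADMIN_WRITE,	/* admin manually resets the state */
	CAUSE_LINK_RESET,	/* RSCN-triggered fabric rescan */
};

/*
 * Leaving 'marginal' is only allowed for an admin action or a link
 * reset; every other transition is unrestricted here.
 */
static bool marginal_transition_allowed(enum rport_state cur,
					enum rport_state next,
					enum transition_cause cause)
{
	if (cur != RPORT_MARGINAL || next == RPORT_MARGINAL)
		return true;	/* nothing to police */

	return cause == CAUSE_ADMIN_WRITE || cause == CAUSE_LINK_RESET;
}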

So the marginal state really just influences the path grouping in multipathd, and multipath should switch to the marginal path group if all running paths are gone.
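As a conceptual sketch of that fallback order (this is not multipathd code; the types and the pick_path_group() helper are made up purely to illustrate the rule): prefer a group with healthy paths, fall back to the marginal group only when nothing better is left, and never select failed paths.

/*
 * Conceptual sketch of the grouping/fallback rule described above.
 * Not multipathd source; all identifiers are hypothetical.
 */
#include <stddef.h>

enum path_group_kind {
	PG_OPTIMAL,	/* healthy, preferred paths */
	PG_MARGINAL,	/* usable, but flagged marginal by the transport */
	PG_FAILED,	/* down paths - never selected */
};

struct path_group {
	enum path_group_kind kind;
	unsigned int nr_active;	/* paths in this group able to carry I/O */
};

/* Prefer the optimal group; use the marginal group only as a last resort. */
static const struct path_group *
pick_path_group(const struct path_group *groups, size_t nr_groups)
{
	const struct path_group *marginal = NULL;
	size_t i;

	for (i = 0; i < nr_groups; i++) {
		if (!groups[i].nr_active)
			continue;
		if (groups[i].kind == PG_OPTIMAL)
			return &groups[i];	/* best case: normal paths */
		if (groups[i].kind == PG_MARGINAL && !marginal)
			marginal = &groups[i];	/* remember as fallback */
	}

	return marginal;	/* NULL means all paths are gone */
}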

Cheers,

Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare@xxxxxxx                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


