On 10/19/20 6:19 PM, Michael Christie wrote:
> On Oct 19, 2020, at 11:10 AM, Michael Christie <michael.christie@xxxxxxxxxx> wrote:
>> So it’s not clear to me: if you know the path is not optimal and might hit
>> a timeout, and you are not going to use it once the existing IO completes,
>> why even try to send it? I mean, in this setup, new commands entering the
>> dm-multipath layer will not be sent to these marginal paths, right?
> Oh yeah, to be clear, I meant: why try to send it on the marginal path when
> you are setting up the path groups so they are not used and only the optimal
> paths are used? When the driver/SCSI layer fails the IO, the multipath layer
> will make sure it goes on an optimal path, right? So you do not have to worry
> about hitting a cmd timeout and firing off the SCSI EH.
> One other question I had, though: are you setting up multipathd so that the
> marginal paths are used if the optimal ones were to fail (like the optimal
> paths hit a link down, dev_loss_tmo or fast_io_fail fires, etc.), or will
> they be treated like failed paths?
> So could you end up with 3 groups:
>
> 1. Active optimal paths
> 2. Marginal paths
> 3. Failed paths
>
> If the paths in 1 move to 3, does multipathd handle it like an all-paths-down
> case, or does multipathd switch to #2?
Actually, marginal paths work similarly to the ALUA non-optimized state:
yes, the system can send I/O to them, but it would be preferable for the I/O
to be moved somewhere else.
If there is no other path (or no better path), yeah, tough.
Hence the answer would be 2).
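For reference, a minimal multipath.conf sketch of that setup (parameter
names as in multipath.conf(5); the marginal_pathgroups option needs a
reasonably recent multipath-tools, and the numbers are illustrative
placeholders, not tuning advice):

    defaults {
        # Marginal path detection thresholds; see multipath.conf(5)
        # for the exact semantics of each knob:
        marginal_path_double_failed_time    60
        marginal_path_err_sample_time       120
        marginal_path_err_rate_threshold    10
        marginal_path_err_recheck_gap_time  300

        # Keep marginal paths in a separate, last-resort path group
        # instead of failing them outright, so they are only used
        # once no normal path remains:
        marginal_pathgroups yes
    }

With that, you get exactly the three buckets from above: the optimal
groups, a marginal group multipathd can fall back to, and failed paths.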
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@xxxxxxx +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer