[dm-devel] How should path group failback work in a multi-node cluster?

Should automatic path group failback be disabled in all cluster
configurations?  This is certainly what I have assumed in order to
avoid cases where the nodes of a cluster battle one another, each
trying to activate a different path group for the same block device.
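
For reference, this policy can be expressed with the failback keyword
in multipath.conf.  A minimal sketch follows; the "manual" value is a
real documented setting, the rest is illustrative:

    # /etc/multipath.conf -- sketch: disable automatic failback so that
    # no node tries to pull devices back to the preferred path group on
    # its own.  With "manual", failback happens only when an
    # administrator explicitly switches the path group via multipathd.
    defaults {
            failback manual
    }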

Yet this approach has the disadvantage of potentially producing an
unbalanced assignment of block devices to path groups over time; that
is, too many block devices could end up active on the same path group.
Most asymmetric storage systems are not meant to reserve one path group
as a hot standby for all logical units; instead they statically balance
the assignment of logical units to path groups in order to spread the
overall I/O load.

Another approach is to initiate automatic failback from a host only if
the initial failover was initiated by that host.  Otherwise, assuming
the currently active path group contains active paths, simply accept
the externally initiated re-assignment away from the highest-priority
path group and start using paths from the active path group.
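
A sketch of this decision logic in C follows; the type and field names
are hypothetical, invented purely to illustrate the policy, and are not
actual dm-multipath code:

    /* Hypothetical per-device state for the "fail back only if this
     * host initiated the failover" policy. */
    struct pg_state {
            int current_pg;          /* path group currently in use */
            int preferred_pg;        /* highest-priority path group */
            int we_initiated_switch; /* did *this* host move the device
                                        off the preferred group? */
            int current_pg_has_active_paths;
    };

    /* Return nonzero if this host should fail back to preferred_pg. */
    static int should_failback(const struct pg_state *s)
    {
            if (s->current_pg == s->preferred_pg)
                    return 0;  /* already on the preferred group */
            if (s->we_initiated_switch)
                    return 1;  /* we moved away, so we move back */
            if (s->current_pg_has_active_paths)
                    return 0;  /* accept the external re-assignment */
            return 1;          /* current group has no usable paths */
    }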

This approach requires tracking, on a per-block-device basis, which
path group activations away from the highest-priority path group were
initiated by the host itself.  The multipath target driver in the
kernel seems like the best place for this.  Multipathd could then pull
this information out and use it to further conditionalize automatic
path group failback.
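
To make the tracking concrete, the kernel-side state could look
something like the following.  This is again a hypothetical sketch; the
enum and struct are invented for illustration and are not part of the
actual dm-mpath interface:

    /* Hypothetical record of who moved a device off its
     * highest-priority path group. */
    enum pg_switch_origin {
            PG_SWITCH_NONE,     /* still on the highest-priority group */
            PG_SWITCH_LOCAL,    /* this host initiated the failover */
            PG_SWITCH_EXTERNAL, /* another node or the array moved it */
    };

    struct mp_failback_state {
            unsigned int active_pg;       /* active path group index */
            unsigned int preferred_pg;    /* preferred group index */
            enum pg_switch_origin origin; /* cause of the last switch */
    };

Multipathd could then read this state through the target's status
output and trigger an automatic failback only when the origin is
PG_SWITCH_LOCAL.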

