boot scripts and dm-multipath

Resending this since I don't think I got any feedback last time.

Linux distributions currently invoke /sbin/multipath fairly early in the
boot sequence, followed some time later by /sbin/multipathd.  I think
these distributions run the risk of a boot-time hang if read I/Os issued
to dm-multipath devices before multipathd is started (by kpartx in
/etc/boot.multipath, for instance) fail, for reasons unrelated to the
SCSI transport, on storage targets configured with the dm-multipath
queue_if_no_path feature.  Without multipathd running there is no way to
time out the "queue I/O forever" behavior in an all-paths-down case.

In these cases, the dm-multipath device is created because the storage
target responds successfully to the device ID inquiry on each path, but
all path tests and read/write I/O requests issued on any path to the
device fail.  If the dm-multipath device was configured with the
queue_if_no_path feature, the kernel dm-multipath code queues the failed
read/write I/O indefinitely.
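
For reference, the knobs involved look roughly like this (names and
values are made up; see the multipath-tools documentation for
specifics).  multipathd's no_path_retry setting is what bounds the
queueing, and a queueing map can also be failed by hand:

    # /etc/multipath.conf fragment -- hypothetical values
    defaults {
        no_path_retry   queue   # queue indefinitely (queue_if_no_path)
        # no_path_retry 5       # instead: fail I/O after 5 checker
        #                       # intervals once all paths are down
    }

    # escape hatch for a map already queueing with no daemon running:
    # dmsetup message mpath0 0 fail_if_no_path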

Newer versions of multipathd (that is, ones based on
multipath-tools-0.4.7) do not need multipath to be invoked in order to
configure dm-multipath devices; simply invoking multipathd suffices.  Is
it reasonable to change these scripts to invoke multipathd instead of
multipath at early boot, and not invoke multipath at all from these
scripts?
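
Concretely, something like the following (a sketch only, not a patch
against any particular distribution's scripts):

    # proposed early-boot fragment: let multipathd configure the maps
    /sbin/multipathd                # configures maps itself (>= 0.4.7)
    kpartx -a /dev/mapper/mpath0    # now safe: multipathd can time out
                                    # queue_if_no_path via no_path_retry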

