boot scripts and dm-multipath

The SuSE and Red Hat enterprise distributions (and possibly others)
currently invoke multipath fairly early in the boot sequence, followed
some time later by multipathd.  I think these distributions risk a
boot-time hang: read I/Os issued to dm-multipath mapped devices before
multipathd is started (by kpartx in /etc/boot.multipath, for instance)
can fail for reasons that are not SCSI transport related, and if the
storage target is configured with the dm-multipath queue_if_no_path
feature those failed I/Os are queued.  Without multipathd running there
is no way to time out the "queue I/O forever" behavior in an
all-paths-down case.
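
For what it's worth, the only way I know of to bound that behavior is
through multipathd itself.  A minimal multipath.conf sketch, with
illustrative values rather than anything taken from the distributions'
shipped defaults:

    defaults {
        # no_path_retry bounds queue_if_no_path: queue for this many
        # checker intervals after all paths are lost, then fail I/O.
        # multipathd is the component that counts the intervals and
        # tells the kernel to stop queueing, so this setting has no
        # effect while only the early-boot multipath/kpartx
        # invocations have run.
        no_path_retry    12
        polling_interval 5
    }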

In these cases the dm-multipath device is created because the storage
target responds successfully to the device identification (VPD page
0x83) inquiry on each path, yet all path tests and all read/write I/O
requests issued on any path to the block device fail.  The
kernel-resident dm-multipath code currently queues the failed
read/write I/O indefinitely when the queue_if_no_path attribute is
configured for the mapped device, since it is unable to distinguish
SCSI transport related failures from other errors.
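
As an aside, the queueing can be switched off by hand on a hung map via
a device-mapper message; a hedged example, assuming a map named mpath0
(the map name is purely illustrative):

    # Disable queue_if_no_path on an existing map so queued I/O fails
    # back to the caller; sending "queue_if_no_path" re-enables it.
    dmsetup message mpath0 0 fail_if_no_path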

Newer versions of multipathd (those based on multipath-tools-0.4.7 and
later) do not need multipath to be invoked in order to configure the
dm-multipath mapped devices; invoking multipathd alone suffices.  Would
it be reasonable to change these scripts to invoke multipathd at early
boot instead of multipath, and to drop the multipath invocation from
these scripts entirely?
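
For concreteness, a rough sketch of what the early-boot script might
reduce to; the paths and ordering are illustrative and not copied from
either distribution's actual scripts:

    # Load the target and start the daemon.  With multipath-tools-0.4.7
    # the daemon discovers paths and creates the maps itself, so no
    # separate /sbin/multipath call is needed before kpartx runs.
    modprobe dm-multipath
    /sbin/multipathd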

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
