dm-multipath fails when one path is taken offline.


I seem to have multipath set up correctly:

# multipath -ll
vrp (360060e8010053b90052fb06900000190) dm-8 HITACHI,DF600F
size=70G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
 `- 5:0:0:0 sdb 8:16 active ready running

When I fail the /dev/sdb path, everything works fine; I can fail and restore that path all day.  When I fail the /dev/sdc path, however, the kernel remounts the filesystem read-only.
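
If more detail from the failure window would help, this is roughly what I'd capture while the port is disabled (vrp is just the alias from my multipath.conf):

# multipathd -k"show paths"          (the checker's view of each path)
# multipath -ll vrp                  (dm's view of the map)
# dmesg | tail -n 30                 (any SCSI or ext3 errors logged)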

Background:
Running v0.4.9 of device-mapper-multipath
Hitachi AMS2500 array, using ports 0F and 1F
Brocade 5300 switches, split into two fabrics (fabric A port 30, fabric B port 30).
The host has a QLogic 2462 card with two ports in use.
device-mapper-multipath names the device; an LVM2 PV is created on it, placed in a VG, and an LV is carved from that (see below).
Failures are produced by disabling the fabric port at the switch (or by physically disconnecting the fiber).
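
I haven't tried it, but I assume I could trigger the same failure in software via sysfs rather than at the switch, along these lines:

# echo offline > /sys/block/sdc/device/state     (take the sdc path away below multipath)
# echo running > /sys/block/sdc/device/state     (bring it back)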

Current multipath.conf:

blacklist {
       devnode "^sda$"
}

defaults {
               checker_timeout         5
               polling_interval        5
}

multipaths {
       multipath {
               wwid 360060e8010053b90052fb06900000190
               alias                   vrp
#               path_selector           "round-robin 0"
       }
}

With this configuration, 'multipath -ll' returns the same output shown at the top of this message.

I've also worked through several revisions of the multipath.conf file.  If I
remember correctly, with some device-stanza revisions, multipath -ll returned
this result instead:

# multipath -ll
vrp (360060e8010053b90052fb06900000190) dm-8 HITACHI,DF600F
size=70G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 6:0:0:0 sdc 8:32 active ready running
`- 5:0:0:0 sdb 8:16 active ready running

However, both are affected by the same problem.

Excerpt from /etc/fstab:
LABEL=vrp-db            /vrp-db                 ext3    defaults        0 2
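
My assumption is that the read-only remount is just ext3 reacting to I/O errors once the sdc path disappears; I can check the mount state and the superblock's error behaviour if that's useful (the LV name below is a placeholder, since I didn't paste the lvdisplay output):

# mount | grep /vrp-db                                       (confirms the remount to ro)
# tune2fs -l /dev/vrpdg/<lv-name> | grep -i 'errors behavior'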

relevant line from pvscan:
 PV /dev/mpath/vrp   VG vrpdg     lvm2 [70.00 GB / 0    free]

relevant line from vgscan:
 Found volume group "vrpdg" using metadata type lvm2
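
In case it's relevant: I don't think I've restricted LVM's device scanning, so LVM presumably still sees /dev/sdb and /dev/sdc directly as well as /dev/mpath/vrp.  If that could matter, the lvm.conf filter I'd try is roughly this (untested here):

filter = [ "a|^/dev/mpath/|", "a|^/dev/mapper/|", "r|^/dev/sd|" ]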

While I'd prefer an 'active-active' setup, I'd accept an active/passive
setup, provided it fails over correctly, and preferably fails back quickly.
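
If it helps as a starting point, my rough understanding of the relevant knobs is below; the values are guesses on my part rather than anything from Hitachi's documentation, so corrections are very welcome:

multipaths {
       multipath {
               wwid                    360060e8010053b90052fb06900000190
               alias                   vrp
               path_grouping_policy    multibus
               failback                immediate
               no_path_retry           10
       }
}

As I understand it, path_grouping_policy multibus is what gives the active-active behaviour, while failback and no_path_retry control how quickly I/O returns to a restored path and how long it queues when all paths drop, but I may well be off on the details.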

I'm more than happy to provide any other information.

--Jason
