LVM and Multipath with EMC PowerPath (Was: CLVMD - Do I need it)

Hello all,

After reading a thread on this list (CLVMD - Do I need it), I started playing around with CLVM, just to make sure two problems I had in the past were solved:

1) LVM normally cannot be used on shared disks, because the first server that "sees" the PVs will activate them, and the other server will see the LVM objects as inactive. This is solved in LVM2 when used together with CLVM, right? I'm not quite sure about the mechanics of CLVM, but I imagine it shares device UUIDs between the machines. So far, so good.
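
For reference, my understanding is that cluster locking is switched on in /etc/lvm/lvm.conf; a minimal sketch of what I mean (assuming the stock LVM2/clvmd packages, not verified on my side):

  # /etc/lvm/lvm.conf (excerpt) -- settings assumed, untested
  global {
      # locking_type = 3 selects the built-in clustered locking that
      # talks to clvmd; clvmd must be running on every node
      locking_type = 3
  }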

2) The other problem is not directly related to CLVM, but I have found no solution for it yet. In my setup, I have multiple paths to the same devices in the shared storage (either SAN or DAS). On the EMC side we employ PowerPath to aggregate the multiple devices for each LUN, and it works quite well. But LVM is not aware of PowerPath's path aggregation, so when it scans the LUNs' partitions for PVs, it "finds" duplicates, like this:
[root@csumccaixa12 network-scripts]# pvscan
Found duplicate PV 7v9XUzPHIRqe6E0fA6hgCR3ybeaJoiWm: using /dev/sdc1 not /dev/emcpowerb1
Found duplicate PV 3eKnMIm00kg6DXn4MW1UX9QCFh96ykwG: using /dev/emcpowerc1 not /dev/sdb1
Found duplicate PV 3T00PR5Ky1XrBesYHRtyowoBQLWDO1kd: using /dev/sdd1 not /dev/emcpowera1
Found duplicate PV 3eKnMIm00kg6DXn4MW1UX9QCFh96ykwG: using /dev/sde1 not /dev/emcpowerc1
Found duplicate PV 7v9XUzPHIRqe6E0fA6hgCR3ybeaJoiWm: using /dev/sdf1 not /dev/sdc1
Found duplicate PV 3T00PR5Ky1XrBesYHRtyowoBQLWDO1kd: using /dev/sdg1 not /dev/sdd1
  PV /dev/sda3   VG vg0   lvm2 [59.81 GB / 37.75 GB free]
  PV /dev/sdg1            lvm2 [127.43 GB]
  PV /dev/sde1            lvm2 [127.43 GB]
  PV /dev/sdf1            lvm2 [127.43 GB]
  Total: 4 [442.10 GB] / in use: 1 [59.81 GB] / in no VG: 3 [382.29 GB]

You can see above that the /dev/emcpowerX devices were rejected in favor of the plain Linux devices. "vg0" is a VG on the internal disk (/dev/sda).

The problem I see here is that whenever the specific device LVM2 chose goes down because of a link failure, LVM will not automatically fail over to another path, will it? In my tests it did not.

Another matter is that the /dev/emcpowerX devices also give me load balancing, so even if LVM2 did fail over to the other paths (the other devices), I would lose the load balancing feature I get with PowerPath.


Question 1: has anyone solved this problem? Does device-mapper-multipath solve it?
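
(If dm-multipath were used instead of PowerPath, I imagine a minimal /etc/multipath.conf would look something like the sketch below, with LVM then pointed at the /dev/mapper devices; the values here are assumptions, not tested against our EMC array.)

  # /etc/multipath.conf -- minimal sketch, values assumed
  defaults {
      # present each LUN as a single /dev/mapper/mpathN device
      user_friendly_names yes
  }
  blacklist {
      # keep the internal disk out of multipath
      devnode "^sda"
  }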

Question 2: is there a way to "force" which devices LVM should use when scanning for PVs among the disks Linux recognizes?
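
For what it's worth, I suspect the "filter" setting in /etc/lvm/lvm.conf is what I am after here; a minimal sketch of what I have in mind (the device patterns are assumptions for my particular setup, untested):

  # /etc/lvm/lvm.conf (excerpt) -- sketch only, device patterns assumed
  devices {
      # accept the PowerPath pseudo-devices and the internal disk,
      # reject everything else so the /dev/sdX paths are never scanned
      filter = [ "a|^/dev/emcpower.*|", "a|^/dev/sda.*|", "r|.*|" ]
  }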


Thank you all for any hints on this.

Regards,

Celso.
--
*Celso Kopp Webber*

celso@xxxxxxxxxxxxxxxx

*Webbertek - Opensource Knowledge*
(41) 8813-1919
(41) 3284-3035



