ha-lvm


 



Folks,

I have a 5-node cluster backed by an FC SAN with 5 VGs, each containing a single LV.

I am using ha_lvm and have lvm.conf configured to use tags per the
instructions. Things work fine until I try to migrate the volume
containing our home dir (all the others work as expected). The umount for
that volume fails and, depending on the active config, the node either
reboots itself (self_fence=1) or the service simply fails and gets disabled.

lsof doesn't reveal anything "holding" onto that mount point, yet the
umount fails consistently (force_umount is enabled).
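For the record, the checks I've been running to look for users of the mount are roughly the following (assuming /home is the mount point in question):

```shell
# List any processes with open files anywhere on the mounted
# filesystem (fuser -m checks the whole mount, which sometimes
# catches things a plain lsof of the path misses)
fuser -vm /home

# Cross-check with lsof, restricted to that filesystem
lsof +f -- /home

# Confirm the kernel still considers it mounted
grep /home /proc/mounts
```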

Furthermore, it appears at least one of my VGs has bad tags;
is there a way to show what tags a VG has?
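(In case the reporting commands are the right tool here, I believe they can display tags directly, e.g.:)

```shell
# Show the tags attached to each VG
vgs -o vg_name,vg_tags

# And the per-LV tags, for completeness
lvs -o lv_name,vg_name,lv_tags

# If a stale tag turns up, I assume it can be cleared with
# something like (VG name here is a placeholder):
#   vgchange --deltag stale_hostname vg_home
```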

I've gone over the config several times and although I cannot show the
config, here is a basic rundown in case something jumps out...

5 nodes, dl360g5 2xQcore w/16GB ram
EVA8100
2x4GB FC, multipath
5 VGs, each with a single LV, each with an ext3 fs.
ha_lvm is in use as a measure of protection for the ext3 fs's
local locking only via lvm.conf
tags enabled via lvm.conf
initrd's are newer than the lvm.conf changes.
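For reference, the tag-related bits of my lvm.conf follow the documented HA-LVM pattern, roughly as below (the VG and host names here are placeholders, not my actual config):

```
# /etc/lvm/lvm.conf (relevant excerpts only)
global {
    # local file-based locking only, no clvmd
    locking_type = 1
}
activation {
    # activate only the root VG plus VGs tagged with this node's name
    volume_list = [ "rootvg", "@node1.example.com" ]
}
```

With this in place, a VG should only activate on the node whose tag it carries, and the initrd has to be rebuilt after the change (mine are newer than the lvm.conf edits, as noted above).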

I did notice that the ext3 label in use on the home volume was not of
the form /home (it was /ha_home, left over from early testing), but I've
corrected that and the umount failure still occurs.

If anyone has any ideas I'd appreciate it.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


