You are certainly correct. I neglected to mention that I'd also
checked for logged-in users, and there were none. Thanks for this
anyway, I appreciate the feedback.
Corey
Sent from my iPod
On Nov 3, 2010, at 2:15 AM, "Jankowski, Chris"
<Chris.Jankowski@xxxxxx> wrote:
Corey,
I vaguely remember from my work on UNIX clusters many years ago that
if /dir is the mount point of a mounted filesystem, then cd-ing into
/dir or any directory below it from an interactive shell will prevent
the filesystem from being unmounted, i.e. umount /dir will fail. I
believe this restriction exists because unmounting would leave the
shell process in an inconsistent state (its working directory would
no longer exist). lsof will not show it.
Of course, most users end up in their home directory by default
after logging in.
I believe that Linux will have the same semantics as UNIX. You can
test that easily on a standalone Linux box.
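For example, a minimal test along these lines should demonstrate it
(the image file and mount point below are arbitrary):

    # create and loop-mount a scratch ext3 filesystem
    dd if=/dev/zero of=/tmp/test.img bs=1M count=64
    mkfs.ext3 -F /tmp/test.img
    mkdir -p /mnt/test
    mount -o loop /tmp/test.img /mnt/test

    # park a shell's working directory on the filesystem
    cd /mnt/test

    # this should now fail with "device is busy"
    umount /mnt/test

    # cd away, and the umount succeeds
    cd / && umount /mnt/test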
Regards,
Chris Jankowski
-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Corey Kovacs
Sent: Wednesday, 3 November 2010 07:15
To: linux clustering
Subject: ha-lvm
Folks,
I have a 5-node cluster backed by an FC SAN, with 5 VGs, each
containing a single LV.
I am using HA-LVM and have lvm.conf configured to use tags, per the
instructions. Things work fine until I try to migrate the volume
containing our home directories (all the others work as expected).
The umount for that volume fails, and depending on the active config,
the node either reboots itself (self_fence=1) or the service simply
fails and gets disabled. lsof doesn't reveal anything "holding" that
mount point, yet the umount fails consistently (force_umount is
enabled).
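For reference, the checks I know of for finding holders of a mount
point (a sketch; /home as the mount point is assumed here):

    # check holders by device; the ACCESS column marks 'c' for a
    # process whose current working directory is on the filesystem
    fuser -vm /home

    # giving lsof a mount point lists every open file on that fs
    lsof /home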
Furthermore, it appears at least one of my VGs has bad tags. Is
there a way to show what tags a VG has?
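(For reference, a sketch of the tag-listing commands; the tag and VG
names here are hypothetical:

    vgs -o vg_name,vg_tags          # list each VG with its tags
    vgs @node1 -o vg_name           # select VGs carrying tag "node1"
    vgchange --deltag node1 myvg    # remove a stale tag from a VG
)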
I've gone over the config several times, and although I can't post
it, here is a basic rundown in case something jumps out...
5 nodes, dl360g5 2xQcore w/16GB ram
EVA8100
2x4GB FC, multipath
5 VGs, each with a single LV, each with an ext3 fs.
HA-LVM is in use as a measure of protection for the ext3
filesystems; local locking only, with tags enabled, both via
lvm.conf; the initrds are newer than the lvm.conf changes.
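For context, a minimal sketch of the tag-related lvm.conf pieces,
assuming a hypothetical root VG name and node tag:

    # /etc/lvm/lvm.conf
    global {
        locking_type = 1    # local file-based locking
    }
    activation {
        # only the root VG, plus VGs tagged with this node's name,
        # may be activated on this host
        volume_list = [ "rootvg", "@node1" ]
    }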
I did notice that the ext3 label on the home volume was not of the
form /home (it was /ha_home, left over from early testing), but I've
corrected that and the umount failure still occurs.
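(For completeness, a sketch of checking and setting an ext3 label;
the device path here is hypothetical:

    e2label /dev/vg_home/lv_home          # print the current label
    e2label /dev/vg_home/lv_home /home    # set the label to /home
)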
If anyone has any ideas I'd appreciate it.
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster