Terry Davis wrote:
Awesome. I rebooted and applied all available updates, and now it
works. The only thing worth noting in the updates was a kernel update
to 2.6.18-92.1.13.el5. I think the reboot did it (for some reason).
On Wed, Oct 1, 2008 at 12:06 PM, Terry Davis <terrybdavis@xxxxxxxxx> wrote:
On Wed, Oct 1, 2008 at 11:42 AM, Alasdair G Kergon <agk@xxxxxxxxxx> wrote:
I hope that problem was fixed in newer packages.
Meanwhile, try running 'clvmd -R' between some of the commands.
If all else fails, you may have to kill the clvmd daemons in
the cluster and restart them, or even run a 'vgscan' on each node
before the restart.
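For concreteness, the recovery sequence on each node might look
something like this (assuming clvmd runs under the stock RHEL init
script; the exact service name may differ on your distribution):

  service clvmd stop      # or kill the clvmd process directly
  vgscan                  # rescan volume groups before restarting clvmd
  service clvmd start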
Alasdair
--
agk@xxxxxxxxxx
Just a sanity check. I killed all the clvmd daemons and started
clvmd back up. I created the PV on node A:
[root@omadvnfs01a ~]# pvcreate /dev/sdh1
Physical volume "/dev/sdh1" successfully created
Node B knows nothing of /dev/sdh1, though the disk itself is there:
[root@omadvnfs01b ~]# ls /dev/sdh*
/dev/sdh
This is the problem. If you partition the device on one node, you must
run 'partprobe' on all the other nodes so that they re-read their
partition tables. Without doing this, LVM has no idea what /dev/sdh1 is
and therefore cannot lock it. After running partprobe, run 'clvmd -R' so
that clvmd reloads its device cache and knows which devices are
available. After that you can proceed with pvcreate, vgcreate,
lvcreate, etc.
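For example, picking up the device names from above, node B (and every
other node) would need something like this before the new partition is
usable; the ls output is what you would expect to see afterwards:

[root@omadvnfs01b ~]# partprobe /dev/sdh    # re-read the partition table
[root@omadvnfs01b ~]# clvmd -R              # have clvmd refresh its device cache
[root@omadvnfs01b ~]# ls /dev/sdh*
/dev/sdh  /dev/sdh1

After that, a pvcreate run on any one node should be visible cluster-wide.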
John
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster