michael.osullivan@xxxxxxxxxxxxxx wrote:
Hi Mark,
clvmd is running fine on both nodes. The result of "service clvmd status" is
clvmd (pid xxxxx) is running...
active volumes: LogVol00 LogVol01
The result of vgscan is
Reading all physical volumes. This may take a while...
Found volume group "iscsi_raid_vg" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
I just can't create a logical volume either from the command line or using
system-config-lvm...
Did you partition the device before adding a physical volume to it? If
so, did you run partprobe on both nodes? A common scenario is to
partition the device from node 1 and create a physical volume on it.
However, the second node does not automatically re-read the partition
table, so it has no idea the partition exists. When clvmd tells the
second node to activate a vg or lv on this unknown device, that node
responds that it can't lock the device because it doesn't know what it
is. If you end up in this situation, the usual fix is to run this on
both nodes:
# rm /etc/lvm/cache/.cache
# partprobe
# clvmd -R
Then from one node:
# pvscan
# vgscan
# lvchange -ay vg/lv
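To double-check afterwards, you can compare what the kernel sees on each
node and confirm LVM picks everything up (assuming the shared device
shows up as something like /dev/sdb1 on both nodes -- substitute your
actual device name):
# cat /proc/partitions
# pvs
# lvs
The new partition should be listed in /proc/partitions on both nodes
(if it's missing on only one node before the fix, that pretty much
confirms this diagnosis), pvs should show the PV under iscsi_raid_vg,
and the lv attributes in lvs should include 'a' once the volume is
active.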
Try this and see if it helps.
-John