I got my logical volume activated with "lvchange -ay /dev/vg/new_lv" and was able to mount it.
I just wonder why clvmd/gfs did not handle this, as I have seen it do so before.
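For anyone hitting the same thing, here is a minimal sketch of the checks that usually explain it, assuming the volume group was meant to be clustered (VG/LV names are taken from the lvscan output further down; the 'c' attribute on the VG and a running clvmd are what let activation propagate to the other node):
# vgs -o vg_name,vg_attr vgclgfs           # sixth attr character should be 'c' (clustered)
# vgchange -cy vgclgfs                     # flag the VG clustered if it is not (clvmd must be running)
# lvchange -ay /dev/vgclgfs/new_lv         # with a clustered VG, this should activate on all nodes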
Gary Romo
Gary Romo/Denver/IBM@IBMUS
Sent by: linux-cluster-bounces@xxxxxxxxxx 01/15/2009 09:08 AM
Why can't I mount my gfs logical volume on the second node in the cluster?
I am creating a new GFS file system on an existing cluster. Here is what I did:
1. I determined I had space in an existing volume group (both nodes)
2. I created my logical volume (node 1)
3. I ran my gfs_mkfs (node 1)
4. I mounted my new lv on node 1 only
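Roughly, the commands behind steps 2-4 would look like the sketch below (the 25 GB size comes from the lvscan output further down; the cluster name "mycluster", the lock table name "new_gfs" and the journal count of 2 are placeholders, not taken from the thread):
# lvcreate -L 25G -n new_lv vgclgfs
# gfs_mkfs -p lock_dlm -t mycluster:new_gfs -j 2 /dev/vgclgfs/new_lv
# mount -t gfs /dev/vgclgfs/new_lv /gfs/new_mount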
Here is the error I get on node 2:
# mount /gfs/new_mount
/sbin/mount.gfs: invalid device path "/dev/vggfs/new_lv"
I see that the logical volume is "inactive" on node 2 and "ACTIVE" on node 1:
inactive '/dev/vgclgfs/new_lv' [25.00 GB] inherit
ACTIVE '/dev/vgclgfs/new_lv' [25.00 GB] inherit
What do I need to do in order to make this logical volume active on node 2?
I thought that this would have happened automagically via clvmd, and would not have to be done manually.
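As a sketch of the manual workaround (what the reply at the top ended up doing), run on node 2; the service name assumes a RHEL-style init script:
# service clvmd status                     # confirm clvmd is actually running on this node
# lvscan | grep new_lv                     # LV still shows as inactive here
# lvchange -ay /dev/vgclgfs/new_lv         # activate it locally
# mount /gfs/new_mount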
Gary Romo
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster