Hi all!
I have a 3-node test cluster utilizing SCSI fencing and GFS. I have made two GFS logical volumes, lvm1 and lvm2, each using 5GB on 10GB disks. While testing the command-line tools I ran lvextend -L +1G /devicename to bring lvm2 to 6GB, and that completed without any problems. Then I issued gfs_grow /mountpoint and the volume became inaccessible. Any command trying to access the volume hangs, and umount returns: /sbin/umount.gfs: /lvm2: device is busy.
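For reference, the sequence I ran was roughly the following (the device path is a placeholder; going by the umount error the mount point is /lvm2):

    lvextend -L +1G /dev/<vg>/lvm2   # grow the LV from 5GB to 6GB - this completed fine
    gfs_grow /lvm2                   # grow the mounted GFS filesystem - this is where it hung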
A few questions: Since I have two volumes on this cluster and lvm1 works just fine, are there any suggestions for unmounting lvm2 so I can try to fix it?
Is gfs_grow considered bug-free (safe to use or not)?
Is there any other way, besides restarting the cluster/nodes, to get lvm2 back to an operational state?
--
Alan A.