CLVMD without GFS

Hi all,
 
I think there is something to clarify about using CLVM across a cluster in an active/passive setup without GFS.
 
From my understanding, CLVM keeps the LVM metadata coherent among the cluster nodes and provides a cluster-wide locking mechanism that should prevent any node from activating a volume group that has already been activated exclusively (vgchange -a e VGXXX) by another node, as long as that node is up.
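
In other words, with clvmd running on both nodes, I would expect something like this (just a sketch of the behaviour I have in mind; node names are placeholders):

          nodeA# vgchange -a e vg10      # takes an exclusive cluster-wide lock on the VG
          nodeB# vgchange -a e vg10      # should be refused while nodeA is up and holds the lock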
 
I have been playing with it to check this behaviour, but it doesn't seem to do what is expected.
 
I have 2 nodes, A and B (RHEL 5.3 x86_64, cluster installed and configured), using shared SAN storage.
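
Both nodes show up as cluster members; the membership output looks roughly like this (from memory, node names are placeholders):

          cman_tool nodes
          Node  Sts   Inc   Joined               Name
             1   M    ...   ...                  nodeA
             2   M    ...   ...                  nodeB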
 
I have a LUN from this SAN visible to both nodes; on one node I pvcreate'd /dev/mpath/mpath0, vgcreate'd vg10, lvcreate'd lvol1, and created an ext3 FS on /dev/vg10/lvol1.
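
Roughly, the commands were (size and exact options from memory, so this is only a sketch):

          pvcreate /dev/mpath/mpath0
          vgcreate vg10 /dev/mpath/mpath0
          lvcreate -n lvol1 -L 10G vg10      # the size here is just an example
          mkfs.ext3 /dev/vg10/lvol1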
 
clvmd is running in debug mode (clvmd -d2), but it complains that locking is disabled even though locking_type is set to 3 on both nodes.
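
For reference, here is what I believe is the relevant part of /etc/lvm/lvm.conf on both nodes, and roughly what clvmd prints at startup (trimmed):

          global {
              ...
              locking_type = 3      # cluster-wide locking through clvmd
              ...
          }

          clvmd -d2
          ...
          WARNING: Locking disabled. Be careful! This could corrupt your metadata.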
 
On node A:
 
          vgchange -c y vg10 returns OK (vgs -->  vg10     1   1   0 wz--nc)
 
          vgchange -a e --> OK
 
          lvs returns lvol1   vg10   -wi-a-
 
On node B (while things are active on A; A is UP and a member of the cluster):
 
          vgchange -a e --> Error locking on node B: Volume is busy on another node
                                   1 logical volume(s) in volume group "vg10" now active
 
It activates vg10 even though it sees that it is busy on another node.
 
On B, lvs returns lvol1   vg10   -wi-a-, just as it does on A.
 
I think the main problem comes from the fact that, as clvmd itself says when started in debug mode, locking is disabled: WARNING: Locking disabled. Be careful! This could corrupt your metadata.
 
IMHO, the algorithm should be as follows (expected outcome sketched after the list):

The VG is tagged as clustered (vgchange -c y VGXXX).

If a node (node P) tries to activate the VG exclusively (vgchange -a e VGXXX):

          ask the lock manager whether the VG is already locked by another node (node X)
          if so, check whether node X is up:
                    if node X is down, return OK to node P
                    else, return NOK to node P, stating explicitly that the VG is held exclusively by node X
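
With that behaviour, here is roughly what I would expect to see on node B while vg10 is exclusively active on node A (a sketch, not actual output):

          vgchange -a e vg10
            Error locking on node B: Volume is busy on another node
            0 logical volume(s) in volume group "vg10" now active

          lvs vg10
            lvol1   vg10   -wi---      # LV stays inactive on B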
 
Brem
 
PS: this shouldn't be a problem with GFS or other clustered filesystems (OCFS, etc.), as no node should try to activate any VG exclusively.
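
In that case I would expect the VG to be activated in shared (non-exclusive) mode on every node, something like:

          vgchange -a y vg10      # on every node: shared, non-exclusive activation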
 