CLVM exclusive mode


Hello,

I've been playing on RHEL 5.3 with CLVM and exclusive activation, but the results I'm getting are not what I expect.

- My cluster is a freshly installed two-node cluster (node1 and node2) with the packages shipped by RHEL 5.3 x86_64.

- LVM locking type = 3 (set in /etc/lvm/lvm.conf; see the excerpt after this list)

- a SAN LUN (/dev/mpath/mpath2) visible from both nodes

- dlm used as lock_manager
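
For reference, locking type 3 is set in /etc/lvm/lvm.conf; a minimal excerpt of what I have (everything else left at the RHEL defaults):

  # /etc/lvm/lvm.conf (excerpt)
  global {
      # 3 = built-in clustered locking through clvmd
      locking_type = 3
  }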

Everything starts normally from cman to clvmd.

Below is what I'm doing:

On node1:

 pvcreate /dev/mpath/mpath2
 vgcreate -c n vg11 /dev/mpath/mpath2

! nothing in /debug/dlm/clvmd_locks on either node
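
(A note on paths: /debug is simply where I have debugfs mounted:

  mount -t debugfs debugfs /debug

and the dlm subdirectory then carries one <name>_locks file per lockspace, clvmd_locks being clvmd's.)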
 
  vgchange -a n vg11

! nothing in /debug/dlm/clvmd_locks on either node


  vgchange -c y vg11

! nothing in /debug/dlm/clvmd_locks on either node; the VG is now seen as clustered on both nodes.
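
To double-check the clustered flag I also look at the VG attributes; the sixth attribute character should be "c" for a clustered VG, so I expect something like:

  vgs -o vg_name,vg_attr vg11
    VG    Attr
    vg11  wz--nc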
 
  vgchange -a ey vg11

! nothing in /debug/dlm/clvmd_locks on either node
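
(That seems consistent with my understanding of clvmd, which is an assumption on my part: the persistent DLM locks it holds are per-LV activation locks taken on the LV's UUID, shared activation at CR mode and exclusive activation at EX, while VG-level locks are only held transiently. With no LVs in vg11 yet, an empty locks file looks normal.)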

 lvcreate -n lvol1 -L 6G /dev/vg11

On node1, cat /debug/dlm/clvmd_locks gives:

  6f0001 2 3da0001 2204 0 1 10001 2 1 -1 0 0 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
  38a0001 0 0 434 0 1 1 2 1 -1 0 0 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

On node2:

  3da0001 1 6f0001 2204 0 1 1 2 1 -1 0 1 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

Is there something I'm doing wrong or misunderstanding?
My reading is that node1 (which actually activated the VG exclusively) sees a lock on /dev/vg11/lvol1 (the UUID in the resource name corresponds to it) from node id 2, which is node2,
plus a lock from node id 0 (which seems to be the quorum disk id, although no quorum disk is configured in my case).

Also, node2 seems to see the expected lock from node1.
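
In case it helps others decode these lines, my reading of fs/dlm/debug_fs.c (so treat the field order as an assumption) is:

  # id nodeid remid pid xid exflags flags status grmode rqmode
  #    time r_nodeid r_len "r_name"
  #
  # grmode/rqmode: -1=IV 0=NL 1=CR 2=CW 3=PR 4=PW 5=EX
  # status: 1=waiting 2=granted 3=converting

Read that way, nodeid 0 would simply mean "owned by this node" rather than a quorum disk, the first line on node1 (nodeid 2, remid 3da0001) would be the master copy of node2's lock 3da0001, and the 64-byte resource name would be the VG UUID followed by the LV UUID (its tail matches the LV UUID below). What puzzles me most is that node1's own lock is granted at grmode 1 = CR, not 5 = EX, even though I activated exclusively.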

I go on:

On both nodes, lvdisplay -v /dev/vg11/lvol1 gives:

...
  LV UUID                r3Xrp1-prEG-ceCk-A2dh-SA2E-NWoc-unEfdf
  LV Write Access        read/write
  LV Status              available
...

Shouldn't it be seen as NOT available on node2?
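
(One way to check whether the LV is really live on node2, rather than just reported so, is to query device-mapper directly, assuming the default vgname-lvname mapping:

  dmsetup info -c vg11-lvol1
  ls -l /dev/vg11/
)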

Now, on node2:

vgchange -a y vg11:

 1 logical volume(s) in volume group "vg11" now active <-- the VG was supposed to be held exclusively by node1
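
What I expected here was for node2's activation to be refused while node1 holds the exclusive lock, with an error along the lines of (I'm quoting from memory, so the exact wording may well differ):

  Error locking on node node2: Volume is busy on another node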

cat /debug/dlm/clvmd_locks gives:

3da0001 1 6f0001 2204 0 1 1 2 1 -1 0 1 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

on node1:

6f0001 2 3da0001 2204 0 1 10001 2 1 -1 0 0 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
38a0001 0 0 434 0 1 1 2 1 -1 0 0 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
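
(Decoded with the same assumed field layout as above, node2's lock 3da0001 is granted at grmode 1 = CR, i.e. an ordinary shared activation lock sitting next to node1's, which matches the unwanted behaviour I'm seeing.)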

 

I may be missing something in my procedure, because it does everything except what I'm expecting.

Any ideas?

