Re: Fwd: CLVM exclusive mode

I think you've misunderstood what vgchange -aey does.

It activates all the currently existing LVs in that VG exclusively on that node. If you create another LV in that VG, it's activated normally (i.e. shared) on all nodes in the cluster.
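
If you want a newly created LV held exclusively as well, something like this should do it (a sketch, using the vg11/lvol1 names from the report below):

  # drop the shared activation everywhere, then re-activate exclusively here
  lvchange -an vg11/lvol1
  lvchange -aey vg11/lvol1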

Chrissie

On 29/07/09 14:55, brem belguebli wrote:
Hello,

I've been playing on RHEL 5.3 with CLVM and exclusive activation, but the
results I'm getting are not what I'm expecting.

- My cluster is a freshly installed two-node cluster (node1 and node2)
with the packages shipped with RHEL 5.3 x86_64.

- LVM locking type = 3 (see the lvm.conf sketch after this list)

- a SAN LUN (/dev/mpath/mpath2) visible from both nodes

- dlm used as lock_manager
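
For reference, the relevant locking setup in /etc/lvm/lvm.conf looks roughly like this (a sketch; whether you disable the local fallback is a separate choice):

  # /etc/lvm/lvm.conf (excerpt)
  locking_type = 3                 # cluster-wide locking through clvmd
  # fallback_to_local_locking = 0  # optionally refuse local locking if clvmd is down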

Everything starts normally from cman to clvmd.
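
By that I mean the usual RHEL 5 init scripts come up cleanly on both nodes (a sketch):

  service cman start
  service clvmd start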

Below is what I'm doing:

On node1:

  pvcreate /dev/mpath/mpath2
  vgcreate -c n vg11 /dev/mpath/mpath2

! nothing in /debug/dlm/clvmd_locks on either node
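
(/debug here is a debugfs mount; for anyone reproducing this, a sketch assuming debugfs isn't mounted yet and the /debug mountpoint exists:)

  mount -t debugfs none /debug
  cat /debug/dlm/clvmd_locks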

   vgchange -a n vg11

! nothing in /debug/dlm/clvmd_locks on either node

   vgchange -c y vg11

! nothing in /debug/dlm/clvmd_locks on either node; the VG is seen on
both nodes as clustered.

   vgchange -a ey vg11

! nothing in /debug/dlm/clvmd_locks on either node

  lvcreate -n lvol1 -L 6G /dev/vg11

On node1, cat /debug/dlm/clvmd_locks gives:

   6f0001 2 3da0001 2204 0 1 10001 2 1 -1 0 0 64
"iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
   38a0001 0 0 434 0 1 1 2 1 -1 0 0 64
"iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

On node2:

   3da0001 1 6f0001 2204 0 1 1 2 1 -1 0 1 64
"iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

Is there something I'm doing wrong or misunderstanding?
I understand that node1 (which actually activated the VG exclusively)
sees a lock on /dev/vg11/lvol1 (the UUID corresponds to it) from node
id 2, which is node2, plus a lock from node id 0 (which seems to be the
quorum disk id, which is not configured in my case).

Plus, node2 seems to see the right lock from node1.

Continuing:

On both nodes, lvdisplay -v /dev/vg11/lvol1 gives:

...
   LV UUID                r3Xrp1-prEG-ceCk-A2dh-SA2E-NWoc-unEfdf
   LV Write Access        read/write
   LV Status              available
...

Shouldn't it be seen as NOT available on node2?
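
A quicker way to see where the LV is actually active (a sketch; the fifth character of the attr field is 'a' when the LV is active on the node you run this on):

  lvs -o lv_name,lv_attr vg11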

Now, on node2:

vgchange -a y vg11:

  1 logical volume(s) in volume group "vg11" now active <-- the VG was
supposed to be held exclusively by node1

cat /debug/dlm/clvmd_locks gives:

3da0001 1 6f0001 2204 0 1 1 2 1 -1 0 1 64
"iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

on node1:

6f0001 2 3da0001 2204 0 1 10001 2 1 -1 0 0 64
"iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
38a0001 0 0 434 0 1 1 2 1 -1 0 0 64
"iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

I may be missing something in my procedure that makes it do everything
except what I'm expecting.

Any ideas?


------------------------------------------------------------------------

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

