node2 = nodeid 2
node1:
[root@node1 ~]# vgchange -a ey vg11
1 logical volume(s) in volume group "vg11" now active
[root@node1 ~]# lvs
  LV    VG   Attr   LSize Origin Snap%  Move Log Copy%  Convert
  lvol1 vg11 -wi-a- 6.00G
[root@node1 ~]# ldebug
id nodeid remid pid xid exflags flags sts grmode rqmode time_ms r_nodeid r_len r_name
39a0001 0 0 434 0 1 1 2 5 -1 0 0 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
[root@node1 ~]# cdebug
Resource ffff81010abd6e00 Name (len=64) "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
Master Copy
Granted Queue
039a0001 EX
Conversion Queue
Waiting Queue
[root@node1 ~]# mount /dev/vg11/lvol1 /mnt
node2:
[root@node2 ~]# vgchange -a ey vg11
Error locking on node node2: Volume is busy on another node
0 logical volume(s) in volume group "vg11" now active
[root@node2 ~]# ldebug
(no output)
[root@node2 ~]# cdebug
(no output)
[root@node2 ~]# vgchange -a n vg11
Error locking on node node1: LV vg11/lvol1 in use: not deactivating
0 logical volume(s) in volume group "vg11" now active
# vg11/lvol1 is already mounted on node1 !
[root@node2 ~]# vgchange -a y vg11
1 logical volume(s) in volume group "vg11" now active
[root@node2 ~]# mount /dev/vg11/lvol1 /mnt
success
# ..it happens ! ;-)
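(A side note on the debug commands above: ldebug and cdebug look like local
aliases for the DLM debugfs dumps. Assuming debugfs is mounted under
/sys/kernel/debug and the clvmd lockspace is named "clvmd", the stock
equivalents would be roughly:
  cat /sys/kernel/debug/dlm/clvmd_locks   # per-lock view, same columns as the ldebug output
  cat /sys/kernel/debug/dlm/clvmd         # per-resource view, same layout as the cdebug output
The node2 part of the log also shows the key problem: a plain "vgchange -a y"
goes straight past the exclusive lock, so tooling should only ever use the
exclusive "-a ey" / "-a en" forms.)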
Hi Rafael,
Good testing; it confirms that some additional barriers are necessary to prevent undesired behaviours. I'll test the same procedure at VG level by tomorrow.
2009/7/30 Rafael Micó Miranda <rmicmirregs@xxxxxxxxx>
Hi Brem
On Thu, 30-07-2009 at 09:15 +0200, brem belguebli wrote:
> Hi,
>
> does it look like we're hitting some "undesired feature" ;-)
>
> Concerning the 0 nodeid, I think I read that in some Red Hat documents
> or a Bugzilla report; I could find it again.
>
> Brem
>
I made some tests on my lab environment too; I attach the results in the
TXT file.
My conclusions:
1.- logvols with the exclusive flag must be used over clustered volume
groups (obvious and already known)
2.- logvols activated with the exclusive flag must be handled EXCLUSIVELY
with the exclusive flag
---> as part of my lvm-cluster.sh resource script, the exclusive flag is
part of the resource definition in cluster.conf, so this is correctly
handled (see the sketch after the link below)
3.- you can activate an already exclusively-activated logvol on any other
node if you don't take the exclusive flag into account during activation
4.- in-use (open) logvols are protected from deactivation from
secondary nodes, and even from the main node
5.- after a node failure (hang-up, fencing...) the logvol is not open
anymore, so it can be exclusively activated on a new node
All this was tested manually, but it is the expected behaviour of the
lvm-cluster.sh resource script.
Link to lvm-cluster.sh resource script:
https://www.redhat.com/archives/cluster-devel/2009-June/msg00020.html
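To make points 2, 4 and 5 concrete, here is a minimal sketch of the idea.
This is not the actual lvm-cluster.sh linked above; the VG name and the
status check are just illustrative assumptions:

#!/bin/bash
# Illustrative sketch only -- not the real lvm-cluster.sh resource script.
# Assumes the VG name comes from the resource definition in cluster.conf.
VG="vg11"

start() {
    # point 2: always activate with the exclusive flag; this fails if
    # another node already holds the logvols active (and, per point 5,
    # succeeds on a new node once the failed node has been fenced)
    vgchange -a ey "$VG"
}

stop() {
    # keep the exclusive flag on deactivation as well; this fails while
    # a logvol is still mounted/open (point 4), protecting a live filesystem
    vgchange -a en "$VG"
}

status() {
    # 5th character of lv_attr is 'a' when an LV in the VG is active locally
    lvs --noheadings -o lv_attr "$VG" | grep -q '^ *....a'
}

case "$1" in
    start)  start ;;
    stop)   stop ;;
    status) status ;;
    *)      echo "usage: $0 {start|stop|status}" >&2; exit 1 ;;
esac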
Cheers,
Rafael
--
Rafael Micó Miranda
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster