Re: LVM snapshot with Clustered VG

On 06.03.13 10:35, Vladislav Bogdanov wrote:
> 06.03.2013 12:15, Andreas Pflug wrote:
>>>> I made sure it's not active on other nodes: lvchange -an vg/locktest ;
>>>> lvchange -aly vg/locktest
>>> And do you run clvmd from that build tree as well?
>>>
>>> Also, can you please try the attached patch (on top of the one you have)?
>>> I polished the conversion a bit more, denying -an if the volume is
>>> ex-locked somewhere, and made other fixes to the logic.
>> I tried that additional patch. I'm running these test versions on my test
>> node only (including clvmd); the other nodes are still running clvmd
>> 2.02.95 (I guess this shouldn't matter since all are inactive). Same result:
> I believe this matters, because the error you see is received from a remote
> node. Is the node with ID 7400a8c0 local?
Yes, that's the test node.
Hm, not funny if I have to upgrade all nodes on the production system... I'm a little surprised that remote inactive nodes need to be aware of that force-exclusive stuff.
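
For reference, the activation test under discussion boils down to something
like the following (a rough sketch using the vg/locktest volume named above;
the snapshot step at the end is only an assumption about the operation that
fails with the x-lock error, it is not shown in the quoted output):

  # deactivate the LV on all cluster nodes (clvmd distributes the request)
  lvchange -an vg/locktest

  # request exclusive activation on this node (the quoted commands used
  # -aly, i.e. local activation; -aey is the stock flag for an exclusive lock)
  lvchange -aey vg/locktest

  # assumed failing step: snapshot the exclusively activated LV
  lvcreate -s -L 1G -n locktest_snap vg/locktest

Snapshots of a clustered LV require the origin to be activated exclusively
on exactly one node, which is why the exclusive lock matters here.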

>> I'm running corosync 1.4.2 (Debian wheezy).
> Which cluster manager interface does clvmd detect? corosync or openais?
> You should use the former; the openais one is (was) using the LCK service,
> which is very unstable.
It's using openais. I'm not too happy about the stability, so maybe I'll switch to corosync now.
Could this be a reason for the x-lock failure as well?
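
To see which interface a given clvmd build is using, something along these
lines should work (a sketch: -I is clvmd's option for selecting the cluster
interface, but the init script name and the set of available interfaces
depend on the distribution and on how clvmd was built):

  # the usage text lists the cluster interfaces compiled into this clvmd
  clvmd -h

  # stop the packaged daemon and start it with the corosync interface forced
  /etc/init.d/clvm stop        # Debian init script name; adjust as needed
  clvmd -I corosync -d1        # debug output to stderr, stays in the foreground

If the build only knows about openais, switching likely means installing or
rebuilding a clvmd built against the corosync/DLM stack.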


Regards,
Andreas

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

