Re: LVM snapshot with Clustered VG [SOLVED]

On 15.3.2013 13:53, Vladislav Bogdanov wrote:
15.03.2013 12:37, Zdenek Kabelac wrote:
On 15.3.2013 10:29, Vladislav Bogdanov wrote:
15.03.2013 12:00, Zdenek Kabelac wrote:
On 14.3.2013 22:57, Andreas Pflug wrote:
On 03/13/13 19:30, Vladislav Bogdanov wrote:

You could activate LVs with the above syntax [ael]
(there is tag support - so you could exclusively activate an LV on a remote
node via some configuration tags)

Could you please explain this - I do not see anything relevant in the man pages.

Let's say you have 3 nodes A, B, C - each has a TAG_A, TAG_B, TAG_C respectively.
Then on node A you may exclusively activate an LV which has TAG_B - this
will try to exclusively activate the LV on the node which has that tag configured
in lvm.conf (see the volume_list = [] setting).
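
For illustration, a minimal sketch of such a tag-based setup (the names
vg0/lv0 and TAG_B are hypothetical; volume_list lives in the activation
section of lvm.conf):

  # lvm.conf on node B - only LVs carrying TAG_B may be activated here
  activation {
      volume_list = [ "@TAG_B" ]
  }

  # tag the LV, then request exclusive activation (may be issued from node A);
  # only nodes whose volume_list matches the LV's tags will actually activate it
  lvchange --addtag TAG_B vg0/lv0
  lvchange -aey vg0/lv0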



And you want to 'upgrade' remote locks to something else?

Yes, shared-to-exclusive and vice versa.

So how do you convert the lock from shared to exclusive without unlocking?
(If I get it right - you keep the ConcurrentRead lock and you want to take an Exclusive one - to change the state from 'active' to 'active exclusive'.)
https://en.wikipedia.org/wiki/Distributed_lock_manager

Clvmd 'communicates' via these locks.
So a proper algorithm needs to be in place to end up in a consistent state after lock changes (and sorry, I'm not a dlm expert here).
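
(For illustration - as far as this thread goes, the way to get from shared
to exclusive with stock clvmd is drop-and-retake; a minimal sketch, assuming
a hypothetical vg0/lv0:

  lvchange -an  vg0/lv0    # release the shared (CR) lock cluster-wide
  lvchange -aey vg0/lv0    # take the exclusive (EX) lock on this node

which of course leaves a window in which the LV is not active at all.)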



What would be the use case you could not resolve with the current command
line args?

I need to convert a lock on a remote node during the last stage of ver3
migration in libvirt/qemu. That is the "confirm" stage, which runs on the
"source" node, during which the "old" VM is killed and the disk is released.
So, I first ("begin" stage) convert the lock from exclusive to shared on the
source node, then obtain a shared lock on the target node (during "prepare"

Which most probably works only in your environment - since you do not try to
'break' the logic - but it's probably not a generic concept. I.e.
in this controlled environment you could probably live happily even with
local activation of LVs, since you always know what the tool is doing.

There are no other events on the destination node in the ver3 migration
protocol, so I'm unable to convert the lock to exclusive there after
migration is finished. So I do that from the source node, after it has
released its lock.
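
For illustration, the flow described above, with the ordinary parts written
as clvmd-style activation calls (hypothetical names vg0/lv0; the in-place
conversions are exactly what stock lvchange does not provide):

  # source, "begin":   convert the source's exclusive lock to shared in place
  #                     (this is what the patches under discussion add)
  # target, "prepare": activate locally with a shared (CR) lock
  lvchange -aly vg0/lv0
  # source, "confirm": old qemu is killed, the source deactivates locally
  lvchange -aln vg0/lv0
  # no further libvirt event fires on the target, so the shared -> exclusive
  # upgrade has to be requested from the source on the target's behalf -
  # the remote conversion being discussed here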


Is that supported by dlm (since lvm locks are mapped to dlm)?
The command is just sent to a specific clvmd instance and performed there.

As said - the 'lock' is the thing which controls the activation state.
So faking it at the software level may possibly lead to an inconsistency between the dlm and clvmd views of the lock state.

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

