Zdenek,
Thanks for helping make this more clear to me :)
There are a couple of fuzzy sentences - so let's try to make them clearer.
The default mode for 'clvmd' is to 'share' resources everywhere - which
clearly comes from the original 'gfs' requirement and from 'linear/striped'
volumes that can easily be activated on many nodes.
However, over time different use-cases got more priority, so
basically every new dm target (except mirror) does NOT support shared
storage (maybe raid will one day...). So targets like snapshot,
thin, cache and raid do require 'so-called' exclusive activation.
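For illustration, exclusive activation of such a target in a clustered VG
would look roughly like this (the VG/LV names 'vg00', 'thin1' and 'lv0'
are just placeholders):

    # with clvmd, a thin/snapshot/cache LV has to be activated exclusively
    lvchange -aey vg00/thin1
    # while a plain linear/striped LV can use the shared default
    lvchange -ay vg00/lv0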
Good to know the history about clvmd :)
So here comes the difference - lvmlockd by default goes with
'exclusive/local' activation, and shared activation (the old clvmd
default) needs to be requested explicitly.
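A rough sketch of the contrast, assuming a shared VG 'vgshared' (created
with 'vgcreate --shared') and an LV 'lv0' - the names are made up:

    # with lvmlockd, plain activation takes an exclusive lock
    lvchange -ay vgshared/lv0
    # shared activation has to be requested explicitly
    lvchange -asy vgshared/lv0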
Another difference is that the 'clvmd' world is 'automating' activation
around the whole cluster (so from node A it's possible to activate a
volume on node B without ANY other command than 'lvchange').
With the 'lvmlockd' mechanism this was 'dropped', and it is the user's
responsibility to initiate e.g. an ssh command to run the activation on
the other node(s) and to handle any errors.
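A minimal sketch of what such user-side scripting could look like (the
hostname 'nodeB' and the VG/LV names are only placeholders):

    # activate the LV on another node over ssh and check the result
    if ssh nodeB lvchange -ay vgshared/lv0; then
        echo "lv0 activated on nodeB"
    else
        echo "activation on nodeB failed" >&2
        exit 1
    fi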
There are various pros & cons to each solution - both need some setup,
and while the 'clvmd' world is 'set & done', in the lvmlockd world the
scripting still needs to be born in some way.
True.
Also, ATM 'lvmetad' can't be used even with lvmlockd - simply because
we are not (yet) capable of handling 'udev' around the cluster (and it's
not clear we ever will be).
This sentence surprises me a lot. According to the lvmlockd manpage, it
seems clear that lvmlockd can work with lvmetad now.
IIRC, it's not the first time you've mentioned "cluster udev". It
gives me the impression that the current udev system is not
100% reliable for shared disks in a cluster, no matter whether we use
lvmetad or not, right? If so, could you please give an example
scenario where lvmetad may not work well with lvmlockd?
On the positive side - we are working hard to enhance 'scanning' speed
- so in the majority of use-cases there is no real performance gain from
using lvmetad anyway.
Great! Thanks.
Regards,
Eric