Hi David,
Thanks for your reply.
A few more questions:
On 7/9/2020 12:05 AM, David Teigland wrote:
On Wed, Jul 08, 2020 at 03:55:55AM +0000, Gang He wrote:
but I cannot do an online LV reduce from one node;
the workaround is to switch the VG activation mode to exclusive and run the lvreduce command on the node where the VG is activated.
Is this behaviour by design, or is it a bug?
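For reference, the workaround described above might look like the following. This is only a sketch: the VG/LV names and sizes are illustrative, and it assumes an lvmlockd-managed shared VG (note that shrinking is not supported for gfs2).

```shell
# Deactivate the LV on every OTHER node first, then take an
# exclusive activation on the node that will run lvreduce:
vgchange --activate n vg0      # run on each other node
vgchange --activate ey vg0     # run on this node (exclusive)

# Shrink the filesystem and the LV together (size is illustrative):
lvreduce --resizefs -L -10G vg0/lv0

# Return the VG to shared activation across the cluster:
vgchange --activate n vg0      # on this node
vgchange --activate sy vg0     # run on each node (shared)
```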
It was intentional since shrinking the cluster fs and LV isn't very common
(not supported for gfs2).
OK, thanks for the confirmation.
For the pvmove command, I cannot do an online pvmove from one node;
the workaround is to switch the VG activation mode to exclusive and run the pvmove command on the node where the VG is activated.
Is this behaviour by design? Will there be enhancements in the future?
Or is there any workaround to run pvmove under the shared activation mode? e.g. can the --lockopt option help in this situation?
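The exclusive-activation workaround for pvmove might be sketched as follows; the device paths and VG name are illustrative, and this assumes a shared VG managed by lvmlockd.

```shell
# Deactivate on every other node, then activate exclusively here:
vgchange --activate n vg0      # run on each other node
vgchange --activate ey vg0     # run on this node (exclusive)

# Move all allocated extents off the source PV (paths illustrative):
pvmove /dev/sda1 /dev/sdb1

# Restore shared activation across the cluster:
vgchange --activate n vg0      # on this node
vgchange --activate sy vg0     # run on each node (shared)
```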
pvmove is implemented with mirroring, so that mirroring would need to be
replaced with something that works with concurrent access, e.g. cluster md
raid1. I suspect there are better approaches than pvmove to solve the
broader problem.
Sorry, I am a little confused.
Will we be able to do an online pvmove in the future when the VG is
activated in shared mode? From the man page, I got the impression that
these limitations are temporary (or not yet complete).
By the way, can the --lockopt option help in this situation? I cannot find a
detailed description of this option in the man page.
Thanks
Gang
Dave
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/