Re: doubts about using clvm

Bowie Bailey wrote:
carlopmart wrote:
Thanks, Erling. But I have one last question. I will try to combine two
disks on the GFS client side. If I understand correctly, first I need to
import the gnbd devices on both GFS nodes, right? And second, I need to
set up lvm from the GFS nodes and start the clvm service on both nodes
too. But do I need to create the shared lvm disk on both GFS nodes, or
only on one node?

I am doing the same thing on my server using AoE drives rather than
GNBD.

You create the clvm volumes and GFS filesystem(s) from one node, and then
run "vgscan" on the second node to pick it all up.
When a node goes down or is rebooted, how do you clear the "down, closewait" state on the remaining nodes that refer to that vblade/vblade-kernel export?

The "solution" appears to be stop lvm (to release open file handles to the /dev/etherd/e?.? devices), unload "aoe", and reload "aoe". On the remaining "good" nodes.

This particular problem has me looking at gnbd devices again.

If aoe were truly stateless, and the aoe clients could recover seamlessly when a vblade server comes back, I'd have no issues.

- Ian C. Blenke <ian@xxxxxxxxxx> http://ian.blenke.com/

