Hi all,
I've got a two-node Xen cluster I'm trying to set up, with a DRBD device
sitting underneath an LVM PV. My layout is:
* Each node's hard drive is a PV used by one VG.
* Each node has the host OS (dom0) installed on an LV using a small
portion of the available space.
* The majority of the space on each node is assigned to an LV that
DRBD uses as its backing device.
* The DRBD device is itself set up as a new PV (rough commands
sketched just below).
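For reference, this is roughly how I built the stack on each node; the
VG/LV names (vg0, lv_dom0, lv_drbd) and the DRBD resource name (r0) are
just placeholders for what I actually used:

# Disk -> PV -> VG -> LVs (names are placeholders)
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate -n lv_dom0 -L 20G vg0    # small LV for the dom0 install
lvcreate -n lv_drbd -L 400G vg0   # large LV that backs DRBD

# /etc/drbd.conf has a resource pointing at that LV, roughly:
#   resource r0 {
#     device    /dev/drbd0;
#     disk      /dev/vg0/lv_drbd;
#     meta-disk internal;
#   }

# Then the DRBD device itself becomes a new PV
pvcreate /dev/drbd0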
Here's the problem: once I set up '/dev/drbd0' as a VG, I had to enable
cluster-aware locking, which I did by changing lvm.conf to:
filter = [ "a|drbd.*|", "r|.*|" ]
locking_type = 3
And then running:
lvmconf --enable-cluster
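As a sanity check (assuming the stock /etc/lvm/lvm.conf location), this
should show whether those changes actually took effect on both nodes:

# Show the active (non-comment) filter and locking settings
grep -vE '^[[:space:]]*#' /etc/lvm/lvm.conf | grep -E 'locking_type|filter'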
After that, when I run 'pvscan', I get this output on both nodes:
[root@an_san02 ~]# pvscan
connect() failed on local socket: Connection refused
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
PV /dev/drbd0 lvm2 [399.99 GB]
Total: 1 [399.99 GB] / in use: 0 [0 ] / in no VG: 1 [399.99 GB]
How can I tell what or where the connection is failing? It *should*
be going over 'eth1' on both nodes, which is outside of Xen's control.
I've also checked iptables and I don't see any relevant rules (I've
flushed the tables, so only the Xen-related bridging rules remain now).
I can also ping the other node over the dedicated DRBD interfaces.
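My best guess so far is that pvscan can't reach the cluster LVM daemon
over its local socket, so I was planning to confirm that clvmd and the
cluster membership layer are actually up, with something like this
(assuming the cman/clvmd init scripts from the Red Hat cluster suite,
which is what I think I have installed):

# Is the cluster LVM daemon running?
service clvmd status
ps -C clvmd -o pid,args

# Is the underlying cluster membership up and quorate?
service cman status
cman_tool status
cman_tool nodes

Is that the right place to be looking, or is the failing connection
something else entirely?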
Thanks for any insight!
Madi