DLM error

I have a 2-node cluster (briscoe & mccoy) running RHEL4U2 with the
associated cluster CVS sources, which I compiled yesterday (from that
branch), and I'm getting an error when starting clvmd:

"DLM: connect from non cluster node"

Everything else (cman, ccs, fenced) starts correctly.  We have had this
cluster up for a while; yesterday we put it behind a load balancer (a
Cisco CSS11500 or something) and changed the IPs to RFC 1918 addresses,
with the old public IPs mapped directly to the private ones.  DNS for
the hostnames, however, still resolves to the public IPs, so I added
the private IPs to /etc/hosts:

10.0.3.10           briscoe briscoe.sys.oakland.edu
10.0.3.11           mccoy mccoy.sys.oakland.edu

and their external IPs are:    141.210.8.xxx
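One quick sanity check (assuming glibc's getent is available on the nodes) is to confirm what each hostname actually resolves to locally, and that /etc/hosts is consulted before DNS:

```shell
# Show which address the resolver actually returns for each node name.
# getent follows /etc/nsswitch.conf, so if "files" precedes "dns" on the
# hosts: line, the 10.0.3.x entries in /etc/hosts should win over DNS.
getent hosts briscoe mccoy || echo "names do not resolve on this host"

# Confirm the lookup order; it should read something like "hosts: files dns".
grep '^hosts:' /etc/nsswitch.conf
```

If getent still prints the 141.210.8.x addresses here, the cluster tools will see the public addresses regardless of what /etc/hosts says.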

I've done a tcpdump between the nodes, and the join's source address is
indeed 10.0.3.11, but I get the error on whichever node boots last.  I
didn't see a matching bug in bugzilla, so if this isn't a problem on my
end (which I think it is), I'll file one.  My guess is that DLM somehow
believes the other node has a 141.210.x.x address, while the cluster
resolves the node name to a 10.0.3.x address, and the mismatch causes
the error.
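To check that hypothesis directly, it may help to compare what the cluster manager itself records as the member addresses against what the resolver returns (cman_tool and the /proc/cluster files are from the RHEL4-era cluster suite; exact paths may differ with a CVS build):

```shell
# What cman believes about the membership (node names and states),
# guarded so this does nothing on a box without the cluster suite.
command -v cman_tool >/dev/null && cman_tool nodes

# The kernel-side view of the members; on the RHEL4 cluster suite the
# cman kernel module exposes these under /proc/cluster.
[ -f /proc/cluster/nodes ]  && cat /proc/cluster/nodes
[ -f /proc/cluster/status ] && cat /proc/cluster/status

# Compare against the resolver's answer for the peer node.
getent hosts mccoy || echo "mccoy does not resolve on this host"
```

If the membership shows a 141.210.x.x address while the DLM connection arrives from 10.0.3.x, that would explain "connect from non cluster node".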

Can anyone give me a tip or point me in the right direction?

Thanks,
    Andrew




--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
