Re: UDPu transport for public IP addresses?


 



Everything worked fine once I specified IP addresses (not hostnames) in nodelist.ringX_addr and also set quorum.provider to corosync_votequorum.
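
For anyone hitting the same problem, a minimal sketch of the working variant (the IP addresses below are placeholders, substitute your own):

nodelist {
  node {
    ring0_addr: 198.51.100.10   # public IP of node1 instead of its hostname
  }
  node {
    ring0_addr: 198.51.100.20   # public IP of node2
  }
}
quorum {
  provider: corosync_votequorum
}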

On Mon, Jan 5, 2015 at 5:17 PM, Steven Dake <steven.dake@xxxxxxxxx> wrote:
Dmitry,

Corosync UDPU should work with routed packets, although this is very difficult for the community to test in general.  We don't often have two machines on an external network where we can route packets.

Corosync should be able to do the job, with the caveat that you might get false failure detections if your timers are too short for the link latency.  Did you have a look at the logs to make sure that a stable ring is forming and staying active?  If you could attach the debug logs, that would be helpful.
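
For example, the relevant timers live in the totem section and can be lengthened for high-latency links; the values below are purely illustrative, not recommendations:

totem {
  transport: udpu
  token: 10000        # token loss timeout in milliseconds (default is 1000)
  consensus: 12000    # must exceed token; defaults to 1.2 * token
}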

Regards
-steve


On Sun, Dec 28, 2014 at 8:10 PM, Dmitry Koterov <dmitry.koterov@xxxxxxxxx> wrote:
Hello.

I have a geographically distributed cluster; all machines have public IP addresses. No virtual IP subnet exists, so no multicast is available.

I thought the UDPU transport could work in such an environment, couldn't it?

To test everything in advance, I've set up corosync + pacemaker on Ubuntu 14.04 with the following corosync.conf:

totem {
  transport: udpu
  interface {
        ringnumber: 0
        bindnetaddr: ip-address-of-the-current-machine
        mcastport: 5405
  }
}
nodelist {
  node {
    ring0_addr: node1
  }
  node {
    ring0_addr: node2
  }
}
...

(here node1 and node2 are hostnames defined in /etc/hosts on both machines). After running "service corosync start; service pacemaker start", the logs show no problems, but both nodes are always reported offline:

root@node1:/etc/corosync# crm status | grep node
OFFLINE: [ node1 node2 ]

and "crm node online" (as all other attempts to make crm to do something) are timed out with "communication error".

No iptables, SELinux, AppArmor or anything like that is active: just plain virtual machines, each with a single public IP address. Also, tcpdump shows that UDP packets on port 5405 are going in and out, and if I, for example, stop corosync on node1, the tcpdump output on node2 changes significantly. So they definitely see each other.
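
A capture along these lines shows that traffic (the interface name is just an example):

tcpdump -n -i eth0 udp port 5405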

And if I attach a gvpe adapter to these two machines, creating a private subnet between them, and switch the transport back to the default (multicast) one, corosync + pacemaker start working.

So my question is: what am I doing wrong? Or is UDPU not suitable for communication between machines that have only public IP addresses?




_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss
