To use corosync with UDP unicast, see http://www.thatsgeeky.com/2011/12/installing-corosync-on-ec2/
Figured it out! I wasn’t allowing multicast on the vlan interface on my switch. I never even thought about this as an issue because I’m not routing, the nodes are right next to each other. However, tcpdump showed that I was sending tons of multicast, but not receiving any. As soon as I enabled multicast on the vlan interface everything came up.
Also, in frustration I dropped cman, rgmanager, modclusterd in favor of corosync + pacemaker, but I believe I would have been as successful with either option. I think I prefer stonith to fencing, though.
Lastly, is there a way to use unicast? I realize that multicast would be greatly preferable in 3+ node clusters, but in this two node it would be easier to use unicast.
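For what it's worth, corosync 1.x (the EL6-era version) does support unicast via the udpu transport. A minimal sketch of the relevant corosync.conf fragment is below; the network and member addresses are placeholders, not values from this cluster:

```
# /etc/corosync/corosync.conf (fragment) -- udpu transport sketch
# All addresses below are placeholders.
totem {
    version: 2
    transport: udpu              # unicast UDP instead of multicast
    interface {
        ringnumber: 0
        bindnetaddr: 192.0.2.0   # placeholder: network the ring binds to
        member {
            memberaddr: 192.0.2.10   # placeholder: node 1
        }
        member {
            memberaddr: 192.0.2.11   # placeholder: node 2
        }
    }
}
```

With udpu each member must be listed explicitly, which is why multicast scales better for larger clusters but unicast is convenient for a two-node setup like this one.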
Jamison Maxwell
Sr. Systems Administrator
HD Supply - Facilities Maintenance
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of emmanuel segura
Sent: Sunday, February 24, 2013 1:32 PM
To: linux clustering
Subject: Re: Cannot connect to rgmanager
Hello, sorry for my late reply. Try to configure your fence devices, and then start the rgmanager service.
2013/2/19 Maxwell, Jamison [HDS] <JMaxwell@xxxxxxxx>
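As an illustration of "configure your fence devices": the cluster.conf below currently has an empty <fencedevices/> element and fence methods with no devices. A hypothetical fragment using the fence_ipmilan agent might look like the following; the device name, IP address, and credentials are all placeholders, and the right agent depends on the actual hardware:

```
<!-- cluster.conf fragment: hypothetical fence device (placeholder values) -->
<fencedevices>
    <fencedevice agent="fence_ipmilan" name="ipmi-node1"
                 ipaddr="192.0.2.21" login="admin" passwd="secret"/>
</fencedevices>
```

Each clusternode's fence method then references the device by name:

```
<clusternode name="node1" nodeid="1" votes="1">
    <fence>
        <method name="1">
            <device name="ipmi-node1"/>
        </method>
    </fence>
</clusternode>
```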
Yes, rgmanager, cman, ricci, and modclusterd are started and start automatically in run levels three through five…
Jamison Maxwell
Sr. Systems Administrator
HD Supply - Facilities Maintenance
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of emmanuel segura
Sent: Tuesday, February 19, 2013 11:03 AM
To: linux clustering
Subject: Re: Cannot connect to rgmanager
Did you start rgmanager?
2013/2/19 Maxwell, Jamison [HDS] <JMaxwell@xxxxxxxx>
I am attempting to create a two-node cluster where the only resource required is a shared IP address; however, after a couple of attempts I continue to fail. I have followed the guide at http://www.openlogic.com/wazi/bid/188071/ . Everything appears to work until I actually add the IP address resource: both cluster nodes appear as online and quorate, and the configuration validates, but the new resource will not enable. Below I am including the information I think may be relevant, but feel free to ask for more.
===========================================
[root@ hostname]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="4" name="hacluster">
<cman expected_votes="1" two_node="1"/>
<clusternodes>
<clusternode name="actual name of node" nodeid="1" votes="1">
<fence>
<method name="single"/>
</fence>
</clusternode>
<clusternode name="actual name of node" nodeid="2" votes="1">
<fence>
<method name="single"/>
</fence>
</clusternode>
</clusternodes>
<fencedevices/>
<rm>
<failoverdomains/>
<resources/>
<service autostart="1" exclusive="0" name="IP" recovery="relocate">
<ip address="actual shared IP address" monitor_link="on" sleeptime="10"/>
</service>
</rm>
</cluster>
===========================================
[root@ hostname]# clusvcadm -e IP
Local machine trying to enable service:IP...Could not connect to resource group manager
===========================================
[root@ hostname]# strace clusvcadm -e IP
…
connect(5, {sa_family=AF_FILE, path="/var/run/cluster/rgmanager.sk"}, 110) = -1 ENOENT (No such file or directory)
close(5) = 0
write(1, "Could not connect to resource gr"..., 44Could not connect to resource group manager
) = 44
exit_group(1) = ?
===========================================
I would most like to call your attention to the line “connect(5, {sa_family=AF_FILE, path="/var/run/cluster/rgmanager.sk"}, 110) = -1 ENOENT (No such file or directory)”. Someone else also mailed this list with what appears to be the same problem; however, no resolution appears in that conversation. The topic is located here: http://www.redhat.com/archives/linux-cluster/2012-August/msg00156.html .
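The failure mode in the strace is worth spelling out: clusvcadm talks to rgmanager over a Unix-domain socket, and when rgmanager is not running that socket file simply does not exist, so connect() fails with ENOENT. A minimal sketch reproducing the same error class (the socket path is taken from the strace output and assumed absent, i.e. rgmanager is down):

```python
import errno
import socket

# clusvcadm connects to rgmanager over a Unix-domain socket.  If rgmanager
# is not running, the socket file is missing and connect() fails with
# ENOENT -- the same failure the strace above shows.
SOCK_PATH = "/var/run/cluster/rgmanager.sk"  # path from the strace output

def try_connect(path):
    """Return the errno from connecting to a Unix socket, or None on success."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

if try_connect(SOCK_PATH) == errno.ENOENT:
    print("Could not connect to resource group manager")
```

So the "Could not connect" message is a symptom, not the root cause: the fix is to get the rgmanager daemon itself running (which, per the reply above, requires fence devices to be configured first).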
This is version 6.3, no iptables and no selinux until I can get this working. I greatly appreciate any assistance that can be offered.
Jamison Maxwell
Sr. Systems Administrator
HD Supply - Facilities Maintenance
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
--
this is my life and I live it for as long as God wills