Re: corosync issue with two interface directives

Hi, 

OK, so how does that affect the failover? Each of the networks is important: if we lose ring 0 or ring 1, we need to fail over.

If I have the config stated below:
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.251.96.160
                #broadcast: yes
                mcastaddr: 239.254.6.8
                mcastport: 5405
                ttl: 1
        }
        interface {
                ringnumber: 1
                bindnetaddr: 10.122.147.192
                #broadcast: yes
                mcastaddr: 239.254.6.9
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

If I then pull out the cable for the interface on ring 1, will it fail over? Or will it use ring 1 only if ring 0 fails?

I read the documentation but it is less than clear :-)
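For what it is worth, my reading of the corosync.conf(5) man page is that with two interface directives the totem section also needs an rrp_mode setting: the default ("none") only drives a single ring, "passive" rotates traffic over the rings and drops a ring once it is marked faulty, and "active" sends everything on all rings at once. Either way, a single ring failure should not take the cluster down. A sketch of the totem block with that added (assuming passive is what we want here):

totem {
        version: 2
        secauth: on
        threads: 0
        # default rrp_mode is "none", which only uses a single ring;
        # "passive" rotates over the rings, "active" sends on all of them
        rrp_mode: passive
        # ... the two interface blocks (ringnumber 0 and 1) as above ...
}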

I would just try it and pull the cable out, but sadly that requires me to fly to Vienna, which seems a little extravagant.
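That said, it might not need the flight: each node's ring status can be checked over SSH with corosync-cfgtool, and a ring that corosync has marked faulty can be re-enabled the same way:

# print the status of all rings on the local node
corosync-cfgtool -s

# re-enable redundant ring operation after a ring was marked faulty
corosync-cfgtool -r

If ring 1 never shows up in that output, corosync is not watching it at all.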

From: emmanuel segura <emi2fast@xxxxxxxxx>
Reply-To: linux clustering <linux-cluster@xxxxxxxxxx>
Date: Sun, 5 Feb 2012 20:14:14 +0100
To: linux clustering <linux-cluster@xxxxxxxxxx>
Subject: Re: corosync issue with two interface directives

I think the ringnumber must be different for every network.

2012/2/5 Ben Shepherd <bshepherd@xxxxxxxxx>
We currently have a 2-node cluster. We configured HA on one network to take inbound traffic, with multicast in corosync and one VIP.

This works fine most of the time (occasionally, if you take the cable out, both interfaces end up with the VIP, but that is another story).
The customer now has another network on which they want to take traffic. I have assigned the VIPs like this:

node lxnivrr45.at.inside
node lxnivrr46.at.inside
primitive failover-ip1 ocf:heartbeat:IPaddr \
        params ip="10.251.96.185" \
        op monitor interval="10s"
primitive failover-ip2 ocf:heartbeat:IPaddr \
        params ip="10.2.150.201" \
        op monitor interval="10s"
colocation failover-ips inf: failover-ip1 failover-ip2
property $id="cib-bootstrap-options" \
        dc-version="1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        stonith-enabled="false"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
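(A side note on the above, in case it helps: with stonith-enabled="false" and no-quorum-policy="ignore" on a two-node cluster, a pulled cable splits the cluster and each node happily runs the VIP on its own, which would explain the "both interfaces end up with the VIP" behaviour; fencing is the usual cure. Also, since the two VIPs are strictly colocated, the same intent could be written as a group, which implies the colocation plus a start order. Sketch only; the group name is made up:)

# hypothetical alternative to the colocation constraint above
group failover-ip-group failover-ip1 failover-ip2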

Current Corosync configuration is:

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.251.96.160
                #broadcast: yes
                mcastaddr: 239.254.6.8
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

I am a little confused about the interface directive. Should I add the multicast address for the 2nd network as ring 1, or can I have two interface blocks on ring 0 on different networks?

Giving me:

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.251.96.160
                #broadcast: yes
                mcastaddr: 239.254.6.8
                mcastport: 5405
                ttl: 1
        }
        interface {
                ringnumber: 0
                bindnetaddr: 10.122.147.192
                #broadcast: yes
                mcastaddr: 239.254.6.9
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

I just need to make sure that if I lose either of the interfaces, the VIPs fail over.
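One caveat here: redundant rings only keep the corosync membership alive; Pacemaker will not move a VIP just because the NIC carrying it lost link, unless something monitors connectivity. A common pattern is an ocf:pacemaker:ping clone plus a location rule keyed on the pingd attribute. Sketch only; the gateway address and resource names below are made up:

# ping a gateway on the monitored network and publish the result as "pingd"
primitive ping-gw ocf:pacemaker:ping \
        params host_list="10.251.96.161" multiplier="100" \
        op monitor interval="15s"
# run the ping resource on every node
clone ping-gw-clone ping-gw
# forbid the VIP on nodes that cannot reach the gateway
location failover-ips-need-net failover-ip1 \
        rule -inf: not_defined pingd or pingd lte 0

With that in place, losing connectivity on the monitored network pushes the VIPs to the other node (the colocation drags failover-ip2 along) instead of leaving them stranded.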




--
this is my life and I live it as long as God wills
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
