Re: [Openais] Openais Digest, Vol 90, Issue 1

Team

Resent, as I was not a member of the mailing list and the submission was rejected the first time...

Darren

On 27 January 2012 14:16, Darren Thompson <darrent@xxxxxxxxxxxxx> wrote:
Mumtaz

I'm still not convinced that your use of corosync ring redundancy is even solving the correct problem in your case. It looks to me that you have an invalid network configuration, with the two interfaces on the same subnet; that may be the root of your problems. (You have two interfaces on the same LAN, each with a separate IP address... I'm not sure that is even good practice.)

I'm not sure why you say this: "Also, bonding of interfaces does not work for me as I need two interfaces, each with a separate address." I have regularly used exactly that configuration without error for the last two or so years...

If you want to separate the heartbeat traffic from other IO traffic, you could just set up VLAN interfaces on top of the bond.

In either case, using 802.3ad mode gives you almost twice the bandwidth per host, so you get both fault tolerance and more bandwidth... win/win.
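As a rough sketch of what I mean (Debian-style /etc/network/interfaces; the interface names, addresses, and VLAN IDs are illustrative, and 802.3ad requires LACP support on the switch):

```text
# Bond eth1 + eth2 using 802.3ad (LACP) -- needs a cooperating switch
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100

# VLAN 10 on top of the bond, dedicated to cluster heartbeat traffic
auto bond0.10
iface bond0.10 inet static
    address 192.168.10.111
    netmask 255.255.255.0

# VLAN 20 on top of the same bond, for data/aggregation traffic
auto bond0.20
iface bond0.20 inet static
    address 192.168.1.111
    netmask 255.255.255.0
```

That way you still get two addresses on two logical interfaces, but the fault tolerance comes from the bond underneath rather than from corosync ring redundancy.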

Try it, you may be surprised...

Regards
Darren


On 26 January 2012 10:07, M Siddiqui <msiddiqui@xxxxxxxxxxx> wrote:

Date: Wed, 25 Jan 2012 14:02:39 +1100
From: Tim Serong <tserong@xxxxxxxx>
To: openais@xxxxxxxxxxxxxxxxxxxxxxxxxx, discuss@xxxxxxxxxxxx
Subject: Re: [Openais] HA Cluster Connected over VPN
Message-ID: <4F1F70CF.6030705@xxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 01/25/2012 12:32 PM, M Siddiqui wrote:
> Hi there,
>
> I have a situation where two cluster nodes are connected over the VPN;
> each node
> is configured with two interfaces to provide ring redundancy for corosync:
>
> NODE1:
>    eth1: 192.168.1.111/24
>    eth2: 192.168.1.112/24
>
> NODE2:
>    eth1: 192.168.1.113/24
>    eth2: 192.168.1.114/24
>
> corosync version 1.4.2
> transport udpu (multicast has the same issue)
>
> Since two nodes are geographically distributed and connected over the VPN,
> configuring each interface in a different subnet is not an option here.
>
> Now corosync gets confused due to the shared subnet; how can we handle
> this situation?
> What is the experts' recommendation? Thanks in advance for the answer.

I'm pretty sure if you're doing multiple rings, they need to be on
separate subnets.  Question: if you're going over a single openVPN
instance, you only really have one communication path between the nodes,
right?  In which case, redundant rings won't actually help.
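For reference, redundant rings on separate subnets would look roughly like this in corosync.conf (a sketch for corosync 1.x with the udpu transport; the addresses and member lists are illustrative, and rrp_mode must be set for a second ring to be used):

```text
totem {
    version: 2
    transport: udpu
    rrp_mode: passive          # required when a second ring is defined

    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0      # ring 0 on 192.168.1.0/24
        mcastport: 5405
        member { memberaddr: 192.168.1.111 }
        member { memberaddr: 192.168.1.113 }
    }
    interface {
        ringnumber: 1
        bindnetaddr: 192.168.2.0      # ring 1 on a *different* subnet
        mcastport: 5407
        member { memberaddr: 192.168.2.111 }
        member { memberaddr: 192.168.2.113 }
    }
}
```

With both rings on 192.168.1.0/24, corosync cannot reliably tell which interface each ring should bind to, which is consistent with the confusion described above.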

I see. Thanks! 

Actually in my setup I am using two interfaces on each node: 
eth1 for heartbeat and eth2 for some data aggregation from other
hosts on the same network as well as hosts across the VPN.

Now, I agree there is one communication path for hosts across the
VPN, but we can avoid congestion while aggregating data from hosts
on the same network (I mean all hosts on one end of the VPN). In this
situation, even if we don't configure eth2 as a backup ring in corosync.conf,
corosync still gets confused and does not work.

Also, bonding of interfaces does not work for me as I need two interfaces,
each with a separate address.

regards,
mumtaz


Also, you probably want the discuss@xxxxxxxxxxxx list.
openais@xxxxxxxxxxxxxxxxxxxxxxxxxx is deprecated, for lack of a better term.

Regards,

Tim
--
Tim Serong
Senior Clustering Engineer
SUSE
tserong@xxxxxxxx


_______________________________________________
Openais mailing list
Openais@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/openais


_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss
