Fwd: Re: [Openais] HA Cluster Connected over VPN

This follow-up came to me but didn't seem to go to the lists, so I'm forwarding it for reference.

-------- Original Message --------
Subject: 	Re: [Openais] HA Cluster Connected over VPN
Date: 	Wed, 25 Jan 2012 14:47:18 +1030
From: 	Darren Thompson <darrent@xxxxxxxxxxxxx>
To: 	Tim Serong <tserong@xxxxxxxx>



Tim

Rather than configuring the two interfaces with separate IP addresses,
you could bond them. This will give you (more reliable) ring redundancy
and resolve the issue you are having with multiple interfaces on the
same subnet.

The use of bonding was (still is?) the preferred way of protecting
against a single NIC failure in a Corosync cluster.
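
For illustration only, a minimal sketch of what a SLES-style bonding
configuration could look like; the file name, bonding mode and polling
interval are assumptions, not taken from this thread (only the
192.168.1.111/24 address comes from the setup described below):

   # /etc/sysconfig/network/ifcfg-bond0  (hypothetical example)
   STARTMODE='auto'
   BOOTPROTO='static'
   IPADDR='192.168.1.111/24'
   BONDING_MASTER='yes'
   # active-backup keeps one NIC as a hot standby; miimon polls link state (ms)
   BONDING_MODULE_OPTS='mode=active-backup miimon=100'
   BONDING_SLAVE0='eth1'
   BONDING_SLAVE1='eth2'

With a single bonded interface, corosync then only needs one ring, so
the "multiple interfaces in the same subnet" problem goes away.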

Refer to:
Some documentation (sorry, SUSE-based):
<http://www.suse.com/documentation/sle_ha/pdfdoc/book_sleha/book_sleha.pdf>

Where this has been discussed previously
<http://www.mail-archive.com/pacemaker@xxxxxxxxxxxxxxxxxxx/msg04925.html>


Regards
Darren

On 25 January 2012 13:32, Tim Serong <tserong@xxxxxxxx> wrote:

    On 01/25/2012 12:32 PM, M Siddiqui wrote:

        Hi there,

        I have a situation where two cluster nodes are connected over
        the VPN; each node is configured with two interfaces to provide
        ring redundancy for corosync:

        NODE1:
           eth1: 192.168.1.111/24
           eth2: 192.168.1.112/24

        NODE2:
           eth1: 192.168.1.113/24
           eth2: 192.168.1.114/24


        corosync version 1.4.2
        transport udpu (multicast has the same issue)

        Since the two nodes are geographically distributed and connected
        over the VPN, configuring each interface in a different subnet
        is not an option here.

        Now corosync gets confused because both interfaces are in the
        same subnet; how can we handle this situation?
        What do the experts recommend? Thanks in advance for the answer.


    I'm pretty sure that if you're doing multiple rings, they need to be
    on separate subnets.  Question: if you're going over a single
    OpenVPN instance, you only really have one communication path
    between the nodes, right?  In which case, redundant rings won't
    actually help.
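
    As an illustrative sketch (not part of the original message), a
    corosync 1.4 udpu configuration with two rings on separate subnets
    could look roughly like this; the 192.168.2.x addresses, ports and
    rrp_mode choice are assumptions:

        totem {
            version: 2
            transport: udpu
            rrp_mode: passive
            interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastport: 5405
                member {
                    memberaddr: 192.168.1.111
                }
                member {
                    memberaddr: 192.168.1.113
                }
            }
            interface {
                ringnumber: 1
                bindnetaddr: 192.168.2.0
                mcastport: 5407
                member {
                    memberaddr: 192.168.2.111
                }
                member {
                    memberaddr: 192.168.2.113
                }
            }
        }

    With rrp_mode: passive, corosync alternates between the rings; but
    as noted above, if both rings cross the same VPN tunnel there is
    still only one real path, so the redundancy is illusory.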

    Also, you probably want the discuss@xxxxxxxxxxxx list.
    openais@lists.linuxfoundation.org is deprecated, for lack
    of a better term.

    Regards,

    Tim
    --
    Tim Serong
    Senior Clustering Engineer
    SUSE
    tserong@xxxxxxxx
    _______________________________________________
    Openais mailing list
    Openais@lists.linuxfoundation.org
    https://lists.linuxfoundation.org/mailman/listinfo/openais


_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss

