Hello Stefan,
I believe you are missing the point here. I actually mentioned: "In
large networks it is disadvantageous to modify IP address – Media
Access Control (MAC) pairs, because there could be certain routers
which do not refresh their arp cache". So I am talking here about a slim possibility of having a router which doesn`t know how to refresh its ARP table. In my opinion this is the safest assumption and this is why the scenario of same IP same MAC is used in the fail-over functionality but this is actually only an example I wanted to present with the hope that the notion will be better understood.
I would now kindly advise you to read the attached links carefully, since they might offer more information on this subject. After that you can try to figure out how to tackle this task.
I will also add some other details; maybe this way the information will become clearer:
Pacemaker - resource manager; it starts and stops the services and contains the logic for ensuring their execution
Heartbeat and Corosync - both messaging layers; they ensure that the nodes can talk to one another
The idea is that with Pacemaker alone your failover functionality might not be complete. Whether you use Pacemaker + Corosync or Pacemaker + Heartbeat is your choice.
From what I read, the Heartbeat solution is maintained by Linbit but has become deprecated, so Pacemaker + Corosync seems to be the better option.
Another useful link: Newbie questions on MAC and high availability failovers
I hope I was able to clarify my previous email.
Regards,
Alex
On Wednesday, October 7, 2015 4:00 PM, Stefan Sicleru <Stefan.Sicleru@xxxxxxxx> wrote:
Hi Alex, Joe, guys,
I really appreciate your feedback on this. It gives us a solid starting point for our discussion. Let me further
detail the context.
We previously assumed that the clients access the cluster through a router/switch. So clients (that are connected
through the router) won’t need to refresh their ARP tables because the MAC address is only valid within a physical
network segment. So the router will have to refresh its own ARP cache.
And the concern would be that there could be certain routers which do not refresh their ARP cache. Is there a
reason why those routers should update their ARP cache? The ARP cache is useful only to keep track of the
hardware addresses of devices that do manage the router itself. But these devices (other routers, or even
dedicated servers) are connected through a dedicated connection to the router. Those devices are not part
of the cluster due to several reasons (traffic isolation, traffic congestion, physical deployment, dedicated VLANs
for management, dedicated serial connection through the console port, etc). That is why I stated that updating
ARP caches is a synthetic example, it doesn’t work in practice.
Moreover, since the routers’ management devices are outside of the cluster, traffic coming from the cluster does
not need (or use) the ARP cache in any way, because the MAC learning process will take care of that (which affects the
MAC table, not the ARP cache). ARP is only required for communicating with the router itself. And there is no device
within the cluster that may want to communicate with the router itself. The cluster only uses the router as a relay device
to reach the clients. And there is no need to change MAC addresses here because the MAC learning process running
on the router will take care of that.
I would like to know if there are other reasons to believe that transferring MAC addresses is beneficial. If so,
is there an open-source clustering framework that provides this behaviour? Or should we try extending Pacemaker
with plugins in order to achieve this?
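For illustration, I imagine such an extension as a custom OCF resource agent; below is a very rough, untested Python skeleton (the parameter names and the up-state check are mine, not taken from any shipped agent, and a real agent would also have to implement meta-data and validate-all):

#!/usr/bin/env python3
import os
import subprocess
import sys

OCF_SUCCESS, OCF_ERR_GENERIC, OCF_NOT_RUNNING = 0, 1, 7

# Hypothetical instance parameters, passed by Pacemaker as OCF_RESKEY_*.
IFACE = os.environ.get("OCF_RESKEY_iface", "eth0")
MAC = os.environ.get("OCF_RESKEY_mac", "")

def iface_is_up():
    # Parse the flags field of "ip link show", e.g. <BROADCAST,UP,LOWER_UP>.
    out = subprocess.run(["ip", "link", "show", "dev", IFACE],
                         capture_output=True, text=True).stdout
    flags = out.split("<", 1)[1].split(">", 1)[0].split(",") if "<" in out else []
    return "UP" in flags

def start():
    # Take over the failed node's MAC, then activate the interface.
    subprocess.run(["ip", "link", "set", "dev", IFACE, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", IFACE, "address", MAC], check=True)
    subprocess.run(["ip", "link", "set", "dev", IFACE, "up"], check=True)
    return OCF_SUCCESS

def stop():
    subprocess.run(["ip", "link", "set", "dev", IFACE, "down"], check=True)
    return OCF_SUCCESS

def monitor():
    # Pacemaker expects OCF_NOT_RUNNING (7) when the resource is stopped.
    return OCF_SUCCESS if iface_is_up() else OCF_NOT_RUNNING

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    handlers = {"start": start, "stop": stop, "monitor": monitor}
    sys.exit(handlers.get(action, lambda: OCF_ERR_GENERIC)())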
Best regards,
Stefan
From: Alexandru Vaduva [mailto:vaduvajanalexandru@xxxxxxxxx]
Sent: Tuesday, October 06, 2015 8:48 PM
To: Stefan Sicleru <Stefan.Sicleru@xxxxxxxx>; lf_carrier@xxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: Adrian Dudau <Adrian.Dudau@xxxxxxxx>; Razvan Grama <Razvan.Grama@xxxxxxxx>; Cosmin Moldoveanu <Cosmin.Moldoveanu@xxxxxxxx>; Joe MacDonald <joe_macdonald@xxxxxxxxxx>
Subject: Re: [CGL 5.0] [CAF.2.1] [Enea Linux] Ethernet MAC address takeover
Hello guys,
Sorry for the late response. I hope I will be able to offer a correct answer; maybe Joe will also be able to provide his input here, but here is what I think of this:
In large networks it is disadvantageous to modify IP address – Media Access Control (MAC) pairs, because there could be certain routers which do not refresh their ARP cache, and this could cause problems with the network traffic. That is why the fail-over functionality is realized by taking over the MAC address as well.
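For reference, the usual way to prompt neighbours to refresh their caches after an IP – MAC pair changes is a gratuitous ARP; routers that ignore it are exactly the failure mode described above. A rough Python sketch (untested, Linux only, needs root; the interface, IP and MAC below are placeholders):

import socket
import struct

def send_gratuitous_arp(iface, ip, mac):
    # Broadcast a gratuitous ARP reply announcing ip -> mac so that
    # neighbours (including routers) can refresh their ARP caches.
    mac_b = bytes.fromhex(mac.replace(":", ""))
    ip_b = socket.inet_aton(ip)
    # Ethernet header: broadcast dst, our src, EtherType 0x0806 (ARP)
    eth = b"\xff" * 6 + mac_b + struct.pack("!H", 0x0806)
    # ARP reply (opcode 2) with sender == target, i.e. gratuitous
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2) + mac_b + ip_b + mac_b + ip_b
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((iface, 0))
    s.send(eth + arp)
    s.close()

# e.g. send_gratuitous_arp("eth0", "192.168.1.10", "02:00:00:aa:bb:cc")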
In such systems, all nodes use the same fixed IP and hardware MAC address on the network, and the nodes are differentiated by the state of the servicing interface. The master (active) node has its interface in the up state while the slave nodes' interfaces are kept down. If the service fails over to another node, that node's interface is brought up. Client requests are serviced by the node whose interface is up.
Transferring the MAC address is beneficial if the resources need to be relocated very quickly, but be aware that having multiple interfaces with the same IP or MAC address connected to a network can destabilize it. This makes it highly important to monitor the takeover process and to completely remove the failed server from the network (for example, keep it powered off). You could take a closer look at a STONITH device (Heartbeat is a good starting point).
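As an illustration of how cheap that takeover step is, here is an untested Python sketch of the standby side (assuming the shared IP/MAC pair is already configured on the interface and iproute2 is installed; "eth0" is a placeholder):

import subprocess

IFACE = "eth0"  # servicing interface, already carrying the shared IP/MAC

def take_over():
    # The failed master must already be fenced (STONITH) before this runs,
    # otherwise the shared MAC would appear twice on the segment.
    subprocess.run(["ip", "link", "set", "dev", IFACE, "up"], check=True)

def stand_down():
    # Keep the interface down while another node is active.
    subprocess.run(["ip", "link", "set", "dev", IFACE, "down"], check=True)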
Useful links:
- STONITH (Linux-HA wiki): "STONITH is a technique for NodeFencing, where the errant node which might have run amok with cluster resources is simply shot in the head."
- "Causes of STONITH in Heartbeat/Pacemaker Clusters" (and Some Hints for Resource Agent Authors and Systems Engineers)
Hope I was able to help you :D
Regards,
Alex V.
On Monday, October 5, 2015 4:31 PM, Stefan Sicleru <Stefan.Sicleru@xxxxxxxx> wrote:
Hello,
We (at Enea) are working towards a CGL 5.0 compliant distribution and we have some questions regarding
the requirement specified within the subject.
The MAC address takeover requirement sounds like this:
--
CGL specifies a mechanism to program and announce MAC addresses on Ethernet
interfaces so that when a SW Failure event occurs, redundant nodes may begin
receiving traffic for failed nodes.
--
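(For the "program" half of that requirement, the classic Linux mechanism is the SIOCSIFHWADDR ioctl; the sketch below is my own untested illustration, needs root, and most drivers require the link to be down first. "Announcing" would then typically be a gratuitous ARP.)

import fcntl
import socket
import struct

SIOCSIFHWADDR = 0x8924  # from <linux/sockios.h>
ARPHRD_ETHER = 1

def set_mac(iface, mac):
    # Program the given MAC (e.g. the failed node's address) onto iface.
    mac_b = bytes.fromhex(mac.replace(":", ""))
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # struct ifreq: 16-byte name, then struct sockaddr { family, data },
    # padded out to cover the full union size on 64-bit systems.
    ifr = struct.pack("16sH6s16x", iface.encode(), ARPHRD_ETHER, mac_b)
    fcntl.ioctl(s.fileno(), SIOCSIFHWADDR, ifr)
    s.close()

# e.g. set_mac("eth0", "02:00:00:aa:bb:cc")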
We’ve accomplished the CAF.2.2 requirement (which is the IP address takeover scenario) and we ran into
some issues regarding CAF.2.1. For the IP scenario we have deployed a Pacemaker+Corosync setup
and everything behaved as expected. However, I have not been able to use the same tools for the
Ethernet takeover scenario. To the best of my knowledge, the closest thing Pacemaker offers is to
configure a load-balancing scheme that involves a cluster of nodes answering to the same IP and MAC
address in a round-robin fashion. But this is not about having a fail-over mechanism for the unicast MAC
addresses (as the CGL requirement specifies), but rather a fail-over mechanism of resources assigned
to multiple machines that share the same multicast MAC address.
Since one request reaches all nodes within the cluster (through the shared multicast MAC), Pacemaker
uses iptables rules on the nodes so that any given packet will be grabbed by exactly one node (through
a hashing policy). This gives us a form of load-balancing. The cluster can be instructed to clone resources
in case of a failure, hence we can achieve a form of a fail-over capability. But then again, this is rather
different from the CGL requirement w.r.t unicast MAC address takeover.
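For concreteness, what the cloned setup boils down to on each node is, as far as I can tell, a rule along these lines (illustrative values only; the legacy ipt_CLUSTERIP target, shown here driven from Python):

import subprocess

# Illustrative values: two nodes share 10.0.0.100 behind the multicast
# MAC 01:00:5e:00:00:64; this node is node 1 of 2 and claims the packets
# that the sourceip hash assigns to it.
subprocess.run([
    "iptables", "-I", "INPUT", "-d", "10.0.0.100", "-i", "eth0",
    "-j", "CLUSTERIP", "--new",
    "--hashmode", "sourceip",
    "--clustermac", "01:00:5e:00:00:64",
    "--total-nodes", "2",
    "--local-node", "1",
], check=True)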
Moreover, if we look over the code of “IPaddr2” Resource Agent, we see that the MAC string (provided as
parameter) is only used for “--clustermac” value of the iptables CLUSTERIP target. There is no other use
for the MAC string provided by IPaddr2. I have not found any resource agent with Ethernet address cloning
capabilities.
I would like to know if the scenario described above is relevant for the requirement. Or should we try
to offer the same fail-over mechanism as we did for the IP takeover scenario? Should we try cloning
the unicast MAC address of the failed interface by using other means? If so, can you give us pointers
to some tools that may be used within a clustering environment?
Aside from these, what would be the use cases for this scenario of having redundancy at the MAC level?
The only use case I can think of is when you don’t want the cluster’s “clients” (routers, switches, rarely client
machines) to update their own ARP caches (after a successful IP address takeover). But this is only
a synthetic example, I don’t see it as a real-life scenario.
Your feedback is highly appreciated.
Warm regards,
Stefan
_______________________________________________
Lf_carrier mailing list
Lf_carrier@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/lf_carrier