Hi!
Götz Reinicke wrote:
>> What if one of the networks fails? E.g. just on one host, or the whole
>> network for all nodes?
>> Is there some sort of auto failover to use the other network for all traffic then?
>> How does that work in real life? :) Or do I have to interact by hand?

Alex Gorbachev wrote:
> We have successfully used multiple bonding interfaces, which work correctly with
> high speed NICs, at least in Ubuntu with 4.x kernels. In combination with MLAG
> (multi-chassis link aggregation) this provides at least good physical redundancy.
> I expect this to improve further as Software Defined Networking solutions become
> more popular and make it easier to create such redundant setups.
> We have not delved into layer 3 solutions, such as OSPF, but these should be helpful
> as well to add robustness to the Ceph networking backend.
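If you go the bonding-plus-MLAG route Alex describes, the host side of such a bond is normally LACP (802.3ad). A rough ifenslave/ifupdown sketch on Ubuntu could look like the following - interface names, addresses and hash policy are only placeholders, not a tested config:

    # /etc/network/interfaces - sketch of an 802.3ad (LACP) bond towards an MLAG switch pair
    # (interface names and addresses are placeholders)
    auto bond0
    iface bond0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        bond-slaves enp3s0f0 enp3s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast
        bond-xmit-hash-policy layer3+4

On the switch side the two ports would be configured as a single MLAG/LACP port-channel across the chassis pair.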
For physical redundancy in our small installation we use a simple setup: 2x10Gbit + 2x1Gbit.
A pair of 10Gbit and 1Gbit interfaces is configured as a failover (active-backup) bond with the
10Gbit interface as primary, and the two interfaces are connected to physically different non-MLAG
switches. The setup is quite simple and uses the pretty standard 'ifenslave'. We share the
front/back networks in Ceph, so the second pair of interfaces is unused for now. The main benefit
of this scheme is full 2x physical failover, including links and switches. Yes, performance drops
slightly when a node falls back to 1Gbit, but the whole cluster stays healthy and safe. The
failover is fully automatic, we have observed no disconnects in Ceph's behaviour, and after the
10Gbit link is repaired, traffic automatically goes back to the higher speed interface.
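For reference, the ifenslave configuration for such a 10Gbit/1Gbit failover pair looks roughly like this on Ubuntu (interface names and addresses below are only an illustration, not our exact files):

    # /etc/network/interfaces - active-backup bond with the 10Gbit port as primary
    # (interface names and addresses are only an illustration)
    auto bond0
    iface bond0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        # eth2 = 10Gbit, eth0 = 1Gbit backup
        bond-slaves eth2 eth0
        bond-mode active-backup
        # traffic moves back to eth2 automatically once the 10Gbit link recovers
        bond-primary eth2
        # check link state every 100 ms
        bond-miimon 100

With bond-mode active-backup only one slave carries traffic at a time, which is why a node simply drops to 1Gbit when the primary link fails and returns to 10Gbit once it is repaired.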
Megov Igor
CIO, Yuterra