Re: Built-in HA?

Another option: if both RDMA ports are on the same card, you can do RDMA over a bond. This does not work if the ports are on two separate cards.
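For what it's worth, a minimal sketch of such a bond via netplan (the interface names enp1s0f0/enp1s0f1, the address, and the 802.3ad mode are assumptions; whether RoCE traffic actually follows the bond depends on the NIC and driver):

    # /etc/netplan/01-storage-bond.yaml (hypothetical file name)
    network:
      version: 2
      ethernets:
        enp1s0f0: {}
        enp1s0f1: {}
      bonds:
        bond0:
          interfaces: [enp1s0f0, enp1s0f1]
          parameters:
            mode: 802.3ad              # LACP; the switch ports must be configured to match
            transmit-hash-policy: layer3+4
          addresses: [10.10.0.11/24]

Apply with 'netplan apply' and point Ceph's public network at the bond's subnet.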

As far as your questions go, my guess is that you would want to put the different NICs in different broadcast domains, or set up source-based routing and bind the source port on the connection (not the easiest approach, but it lets you have multiple NICs in the same broadcast domain). I don't have experience with Ceph in this type of configuration.
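If you do go the source-based routing route, a rough sketch on Linux looks like this (the addresses 10.10.0.11/10.10.0.12 on eth0/eth1, the gateway 10.10.0.1, and the table names are assumptions):

    # one routing table per NIC (the table names are arbitrary)
    echo "100 storage1" >> /etc/iproute2/rt_tables
    echo "101 storage2" >> /etc/iproute2/rt_tables

    # routes out of each interface, kept in its own table
    ip route add 10.10.0.0/24 dev eth0 src 10.10.0.11 table storage1
    ip route add default via 10.10.0.1 dev eth0 table storage1
    ip route add 10.10.0.0/24 dev eth1 src 10.10.0.12 table storage2
    ip route add default via 10.10.0.1 dev eth1 table storage2

    # pick the table based on the packet's source address
    ip rule add from 10.10.0.11 table storage1
    ip rule add from 10.10.0.12 table storage2

This only routes correctly if the connection actually uses the intended source address, which is the binding part mentioned above.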
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Aug 2, 2019 at 9:41 AM Volodymyr Litovka <doka.ua@xxxxxxx> wrote:
Dear colleagues,

At the moment we use Ceph in a routed environment (OSPF, ECMP) and everything is fine: reliability is high and there is nothing to complain about. But for hardware reasons (to be more precise, RDMA offload) we are faced with the need to run Ceph directly on physical interfaces.

According to documentation, "We generally recommend that dual-NIC systems either be configured with two IPs on the same network, or bonded."

Q1: Has anybody tested, and can explain, how Ceph will behave in the first scenario (two IPs on the same network)? I think this configuration requires just one statement in 'public network' (where both interfaces reside)? How will it distribute traffic between links, how will it detect link failures, and how will it switch over?
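For clarity, this is roughly what I mean (the subnet and addresses are just an example):

    # ceph.conf, scenario 1: one public network, two IPs per node in it
    [global]
    public network = 10.10.0.0/24
    # eth0 = 10.10.0.11, eth1 = 10.10.0.12 -- both in the same subnet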

Q2: Has anybody tested a slightly different scenario: both NICs have addresses in different networks, and the Ceph configuration contains two 'public networks'? The questions are the same: how does Ceph distribute traffic between links, and how does it recover from link failures?
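Again roughly, with example subnets (as far as I understand, 'public network' accepts a comma-separated list):

    # ceph.conf, scenario 2: two public networks, one NIC in each
    [global]
    public network = 10.10.1.0/24, 10.10.2.0/24
    # eth0 addressed from 10.10.1.0/24, eth1 from 10.10.2.0/24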


Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
