Re: Ceph and multiple RDMA NICs


 



Thank you for the reply and explanation. I will take a look at the ML threads on RDMA and Ceph that you referenced.

________________________________________
From: David Turner <drakonstein@xxxxxxxxx>
Sent: Friday, March 2, 2018 2:12:18 PM
To: Justinas LINGYS
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Ceph and multiple RDMA NICs

The only communication on the private network for Ceph is between the OSDs, for replication, erasure coding, backfilling, and recovery. Everything else is on the public network, including communication with clients, MONs, MDS, RGW, and literally everything else.

I haven't used RDMA, but as far as the public network vs. private network question goes, that is how Ceph uses them. You can decide whether you want two different subnets for them. There have been some threads on the ML about RDMA and getting it working.
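For reference, splitting the two networks in ceph.conf comes down to the two subnet options below (the subnets are just placeholders for whatever you use):

    [global]
    # client, MON, MDS and RGW traffic
    public_network  = 10.0.1.0/24
    # OSD replication, erasure coding, backfill and recovery traffic
    cluster_network = 10.0.2.0/24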

On Fri, Mar 2, 2018, 12:53 AM Justinas LINGYS <jlingys@xxxxxxxxxxxxxx> wrote:
Hi David,

Thank you for your reply. As I understand it, your experience with multiple subnets suggests sticking to a single device. However, I have a powerful dual-port RDMA NIC (100 Gbps), and I have seen recommendations from Mellanox to separate the two networks. Also, I am planning on having quite a lot of traffic on my private network, since this is for a research project that uses machine learning and stores a lot of data in a Ceph cluster. Considering my case, I assume it is worth the pain of separating the two networks to get the best out of the advanced NIC.

Justin

________________________________________
From: David Turner <drakonstein@xxxxxxxxx>
Sent: Thursday, March 1, 2018 9:57:50 PM
To: Justinas LINGYS
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Ceph and multiple RDMA NICs

There has been some chatter on the ML questioning the need to separate the public and private subnets for Ceph. The trend seems to be toward simplifying your configuration, which for some people means not specifying multiple subnets at all. I haven't heard of anyone complaining about network problems from putting private and public traffic on the same subnet, but I have seen a lot of people run into networking problems by splitting them up.

Personally, I use VLANs for the two networks on the same interface at home, and we have 4-port 10Gb NICs at the office, so we split the traffic there as well, but even there we might be better suited by bonding all 4 ports together and using VLANs to split the traffic. I wouldn't merge them together now, though, since we have per-network graphing on our storage nodes for the public and private networks.

But the take-away is that if it's too hard to split your public and private subnets... don't. I doubt you would notice any difference between getting it working and just not doing it.
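And "not doing it" in ceph.conf terms just means leaving cluster_network out (placeholder subnet below); OSD replication traffic then goes over the public network along with everything else:

    [global]
    public_network = 10.0.1.0/24
    # no cluster_network defined, so all traffic uses the public network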

On Thu, Mar 1, 2018 at 3:24 AM Justinas LINGYS <jlingys@xxxxxxxxxxxxxx> wrote:
Hi all,

I am running a small Ceph cluster (1 MON and 3 OSDs), and it works fine.
However, I have a question about the two networks (public and cluster) that an OSD uses.
There is a reference from Mellanox (https://community.mellanox.com/docs/DOC-2721) on how to configure 'ceph.conf' for RDMA. However, after reading the source code (luminous-stable), I get the feeling that we cannot run Ceph over two NICs/ports, as there is only one 'ms_async_rdma_local_gid' per OSD, and the source code seems to use only one device. I would like to ask how I could communicate with the public network via one RDMA NIC and with the cluster network via another RDMA NIC (applying RoCEv2 to both NICs). Since GIDs are unique within a machine, how can I use two different GIDs in 'ceph.conf'?
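(For context, the single-NIC RDMA setup in that Mellanox document boils down to something like the sketch below; the device name and GID are placeholders from my reading of it, and there appears to be only one ms_async_rdma_device_name / ms_async_rdma_local_gid pair per daemon, which is exactly the limitation I am asking about.)

    [global]
    ms_type = async+rdma
    # single RDMA device and port used for all messenger traffic
    ms_async_rdma_device_name = mlx5_0
    ms_async_rdma_port_num    = 1
    # RoCEv2 GID of that port, as reported by show_gids (placeholder value)
    ms_async_rdma_local_gid   = 0000:0000:0000:0000:0000:ffff:0a00:0101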

Justin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




