Re: Ceph and multiple RDMA NICs

There has been some chatter on the ML questioning the need to separate the public and cluster subnets for Ceph. The trend is toward simplifying your configuration, which for some means not specifying separate subnets at all. I haven't heard anyone complain about network problems from putting private and public traffic on the same subnet, but I have seen a lot of people run into networking problems by splitting them up.

Personally, I use VLANs for the two networks on the same interface at home, and we have 4-port 10Gb NICs at the office, so we split the traffic there as well. Even so, we might be better served by bonding all four ports together and using VLANs to separate the traffic. I wouldn't merge them now, though, since we graph the public and private networks separately on our storage nodes.

But the take-away is: if splitting your public and private subnets is too hard, don't. I doubt you would notice any difference between getting the split working and simply not doing it.
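For reference, the "don't split" approach is just a matter of what you put in ceph.conf. A minimal sketch (the addresses are made up for illustration): pointing both options at the same subnet, or omitting cluster_network entirely, keeps everything on one network; the split version only differs in the second subnet.

```ini
# Single-subnet setup: everything on one network (simplest)
[global]
public_network = 10.0.0.0/24
# cluster_network omitted -> OSD replication uses the public network

# Split setup: replication/heartbeat traffic on a separate subnet (e.g. a VLAN)
# [global]
# public_network  = 10.0.0.0/24
# cluster_network = 10.0.1.0/24
```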

On Thu, Mar 1, 2018 at 3:24 AM Justinas LINGYS <jlingys@xxxxxxxxxxxxxx> wrote:
Hi all,

I am running a small Ceph cluster (1 MON and 3 OSDs), and it works fine.
However, I have a doubt about the two networks (public and cluster) that an OSD uses.
There is a reference from Mellanox (https://community.mellanox.com/docs/DOC-2721) on how to configure 'ceph.conf' for RDMA. However, after reading the source code (luminous-stable), I get the feeling that we cannot run Ceph with two NICs/ports, since there is only one 'ms_async_rdma_local_gid' option per OSD and the source code appears to use only that one NIC. I would like to ask how I could communicate with the public network via one RDMA NIC and with the cluster network via another RDMA NIC (applying RoCEv2 to both NICs). Since GIDs are unique within a machine, how can I use two different GIDs in 'ceph.conf'?
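[For context, the Mellanox-style RDMA configuration being discussed looks roughly like the sketch below; the device name and GID are placeholders, not working values. Because 'ms_async_rdma_local_gid' is a single global option per daemon, both public and cluster traffic end up on whichever RoCE interface that GID belongs to, which is exactly the limitation being asked about.]

```ini
[global]
ms_type = async+rdma
# Device and port for the RDMA-capable NIC (illustrative names)
ms_async_rdma_device_name = mlx5_0
ms_async_rdma_port_num = 1
# Only ONE local GID can be specified per daemon, so a second RDMA NIC
# cannot be addressed here even if public_network/cluster_network differ.
ms_async_rdma_local_gid = 0000:0000:0000:0000:0000:ffff:0a00:0001
```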

Justin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
