Public and Private network over 1 interface

TL;DR:
Has anybody deployed a Ceph cluster carrying both the public and cluster networks over a single 40 gig NIC per node? This is discouraged in http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/:

"One NIC OSD in a Two Network Cluster:
Generally, we do not recommend deploying an OSD host with a single NIC in a cluster with two networks. --- [cut] --- Additionally, the public network and cluster network must be able to route traffic to each other, which we don’t recommend for security reasons."

--------------------------------------------------------
Reason for this question:
My hope is that I can keep capital expenses down this year, then add a second switch and a second 40 gig DAC to each node next year.

Thanks for any wisdom you can provide.
---------------------------------------------------------

Details:
Planned configuration - 40 gig interconnect via Brocade VDX 6940 and 8x OSD nodes configured as follows:
2x E5-2660v4
8x 16GB ECC DDR4 (128 GB RAM)
1x dual port Mellanox ConnectX-3 Pro EN
24x 6TB enterprise SATA
2x P3700 400GB PCIe NVMe (journals)
2x 200GB SSD (OS drives)

1) From a security perspective, why not keep the networks segmented all the way to the node using tagged VLANs or VXLANs, then untag them at the node? That's no different than delivering the two networks to the same host on separate physical interfaces.
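
As a purely illustrative sketch of that untagging with iproute2 -- the interface name, VLAN IDs, and addressing below are placeholder assumptions on my part:

    # public network on VLAN 100 of the single 40 gig port (hypothetical IDs/addresses)
    ip link add link enp1s0 name enp1s0.100 type vlan id 100
    ip addr add 192.168.100.11/24 dev enp1s0.100
    ip link set enp1s0.100 up
    # cluster network on VLAN 200 of the same port
    ip link add link enp1s0 name enp1s0.200 type vlan id 200
    ip addr add 192.168.200.11/24 dev enp1s0.200
    ip link set enp1s0.200 up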

2) By using VLANs, I wouldn't have to worry about the special Ceph configuration mentioned in the referenced documentation, since each VLAN would show up as its own interface on the host.
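
If it helps, the ceph.conf side would then just be the ordinary two-network settings pointed at those VLAN subnets (the subnets here match the placeholder sketch above, so treat them as assumptions):

    [global]
    public network  = 192.168.100.0/24
    cluster network = 192.168.200.0/24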

3) From a performance perspective, has anybody observed a significant hit from untagging VLANs on the node? This is something I can't test myself, since I don't currently own 40 gig gear.
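
If anybody with 40 gig gear is willing to check, a rough before/after comparison could be as simple as an iperf3 run toward an untagged address and then toward a VLAN subinterface address (addresses and stream count below are arbitrary):

    # on the receiving node
    iperf3 -s
    # on the sending node: once toward the untagged address, once toward the VLAN address
    iperf3 -c 192.168.100.11 -P 8 -t 30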

3.a) If I used a NIC with VXLAN offload, wouldn't that remove this potential issue?
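
For what it's worth, whether a particular NIC/driver combination actually exposes VXLAN (UDP tunnel) segmentation offload can be checked with ethtool, e.g. (interface name assumed):

    ethtool -k enp1s0 | grep udp_tnl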

3.b) My back-of-napkin estimate shows that total OSD read throughput per node could max out around 38 Gbps (4800 MB/s). In reality, with plenty of random I/O, I expect to see something closer to 30 Gbps, so a single 40 gig connection ought to leave plenty of headroom, right?
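
For reference, the napkin math behind that 38 Gbps / 4800 MB/s figure, assuming roughly 200 MB/s sustained per SATA spindle:

    24 drives x ~200 MB/s   = 4800 MB/s
    4800 MB/s x 8 bits/byte = 38400 Mbit/s, roughly 38.4 Gbps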
