Re: Public and Private network over 1 interface

For a 4MB block size EC-object use case (mostly reads, not so much writes), we saw some benefit from separating the public and cluster networks at 40GbE. We didn't use two NICs, though; we configured two ports on one NIC.

Both networks can give up to 48Gb/s, but with a Mellanox card/Mellanox switch combination that can go up to 56Gb/s.
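
In case it helps, the split itself is just two subnets in ceph.conf, one per port. A minimal sketch (the subnets below are placeholders, not our actual addressing):

    # /etc/ceph/ceph.conf (fragment) -- example subnets, adjust to your addressing
    [global]
        public_network  = 10.0.1.0/24   # client-facing traffic, NIC port 1
        cluster_network = 10.0.2.0/24   # replication/recovery traffic, NIC port 2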

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Brady Deetz
Sent: Monday, May 23, 2016 12:53 PM
To: ceph-users
Subject: [ceph-users] Public and Private network over 1 interface

 

TLDR;

Has anybody deployed a Ceph cluster using a single 40 gig NIC? This is discouraged in http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/


"One NIC OSD in a Two Network Cluster:
Generally, we do not recommend deploying an OSD host with a single NIC in a cluster with two networks. --- [cut] --- Additionally, the public network and cluster network must be able to route traffic to each other, which we don’t recommend for security reasons."

 

--------------------------------------------------------

Reason for this question:

My hope is that I can keep capital expenses down for this year, then add a second switch and a second 40 gig DAC to each node next year.

 

Thanks for any wisdom you can provide.

---------------------------------------------------------

 

Details:

Planned configuration - 40 gig interconnect via Brocade VDX 6940 and 8x OSD nodes configured as follows:

2x E5-2660v4

8x 16GB ECC DDR4 (128 GB RAM)

1x dual port Mellanox ConnectX-3 Pro EN

24x 6TB enterprise SATA

2x P3700 400GB PCIe NVMe (journals)

2x 200GB SSD (OS drive)

 

1) From a security perspective, why not keep the networks segmented all the way to the node using tagged VLANs or VXLANs, and untag them at the node? That's no different from delivering the two networks to the same host on separate physical interfaces.
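
(For concreteness, untagging at the node is just ordinary Linux VLAN subinterfaces; a rough sketch, where the interface name, VLAN IDs, and addresses are made-up examples:)

    # Assumes the switch port trunks VLAN 100 (public) and VLAN 200 (cluster)
    ip link add link enp4s0 name enp4s0.100 type vlan id 100
    ip link add link enp4s0 name enp4s0.200 type vlan id 200
    ip addr add 10.0.1.10/24 dev enp4s0.100   # public network
    ip addr add 10.0.2.10/24 dev enp4s0.200   # cluster network
    ip link set enp4s0.100 up
    ip link set enp4s0.200 up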

 

2) By using VLANs, I wouldn't have to worry about the special Ceph configuration mentioned in the referenced documentation, since the untagged VLANs would show up as individual interfaces on the host.

 

3) From a performance perspective, has anybody observed a significant performance hit from untagging VLANs on the node? This is something I can't test, since I don't currently own 40 gig gear.

 

3.a) If I used a VXLAN-offloading NIC, wouldn't this remove this potential issue?
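
(For what it's worth, on Linux you can at least check whether a NIC advertises tunnel/VXLAN segmentation offload; feature names vary by driver and kernel, so treat this as a rough probe, and the interface name is again a placeholder:)

    ethtool -k enp4s0 | grep -i 'tnl\|vxlan'
    # e.g. "tx-udp_tnl-segmentation: on" means the NIC can offload VXLAN segmentation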

 

3.b) My back-of-napkin estimate shows that total OSD read throughput per node could max out around 38Gb/s (4800MB/s). But in reality, with plenty of random I/O, I'm expecting to see something closer to 30Gb/s. So a single 40 gig connection ought to leave plenty of headroom, right?
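
(Napkin math, assuming roughly 200MB/s sequential per SATA drive, which is an assumption rather than a measured figure:)

    24 drives x ~200MB/s    = 4800MB/s
    4800MB/s x 8 bits/byte  = 38400Mb/s ~= 38.4Gb/s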

