For a 4MB-block-size EC-object use case (mostly reads, not so much writes), we saw some benefit from separating the public and cluster networks on 40GbE. We didn't use two NICs, though; we configured two ports on a single NIC. Both networks can deliver up to 48Gb/s, but with a Mellanox card/Mellanox switch combination it can go up to 56Gb/s.

Thanks & Regards,
Somnath

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Brady Deetz

TL;DR: Has anybody deployed a Ceph cluster using a single 40GbE NIC? This is discouraged in http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
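For context, the referenced documentation describes declaring the two networks in ceph.conf. A minimal sketch of that split-network configuration, using hypothetical example subnets not taken from the original post:

```ini
# Hypothetical ceph.conf fragment -- the subnets below are illustrative only.
[global]
public_network  = 10.0.1.0/24   ; client-facing traffic
cluster_network = 10.0.2.0/24   ; OSD replication/recovery traffic
```

With a single NIC (or a single untagged interface per VLAN), both directives can simply point at whatever subnets the host's interfaces sit on; Ceph binds based on the interface address, not the physical port count.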
--------------------------------------------------------
Reason for this question: My hope is that I can keep capital expenses down for this year, then add a second switch and a second 40GbE DAC to each node next year. Thanks for any wisdom you can provide.
---------------------------------------------------------
Details: Planned configuration: 40GbE interconnect via Brocade VDX 6940, and 8x OSD nodes configured as follows:

2x E5-2660v4
8x 16GB ECC DDR4 (128GB RAM)
1x dual-port Mellanox ConnectX-3 Pro EN
24x 6TB enterprise SATA
2x P3700 400GB PCIe NVMe (journals)
2x 200GB SSD (OS drive)

1) From a security perspective, why not keep the networks segmented all the way to the node using tagged VLANs or VXLANs, then untag them at the node? From a security perspective, that's no different than sending two networks to the same host on different interfaces.

2) By using VLANs, I wouldn't have to worry about the special Ceph configuration mentioned in the referenced documentation, since the untagged VLANs would show up as individual interfaces on the host.

3) From a performance perspective, has anybody observed a significant performance hit from untagging VLANs on the node? This is something I can't test, since I don't currently own 40GbE gear.

3.a) If I used a VXLAN-offloading NIC, wouldn't this remove this potential issue?

4) My back-of-napkin estimate shows that total OSD read throughput per node could max out around 38Gb/s (4800MB/s). But in reality, with plenty of random I/O, I'm expecting to see something more like 30Gb/s. So a single 40GbE connection ought to leave plenty of headroom, right?
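The back-of-napkin estimate above can be reproduced with a quick calculation. This is only a sketch: the ~200MB/s per-disk sequential read rate is an assumption (typical for enterprise SATA drives), not a figure stated anywhere in the thread.

```python
# Rough per-node read throughput estimate for the proposed OSD node.
osds_per_node = 24        # 24x 6TB SATA drives per node
mb_per_s_per_osd = 200    # ASSUMED sequential read rate per disk, MB/s

total_mb_s = osds_per_node * mb_per_s_per_osd   # aggregate MB/s per node
total_gbps = total_mb_s * 8 / 1000              # convert MB/s to Gb/s

print(f"{total_mb_s} MB/s ~= {total_gbps} Gb/s")
```

This gives 4800MB/s, or about 38.4Gb/s, matching the ~38Gb/s ceiling in the post and showing why a single 40GbE link sits right at the theoretical sequential-read limit of the node.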
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com