I've run the Mellanox 40GbE card, a ConnectX-3 Pro, but that's old now. Back when I ran it, the drivers were a pain to deal with in Ubuntu, primarily during PXE. It should be better now, though. If you have the network to support it, 25GbE is quite a bit cheaper per port and won't be as hard to drive. 40GbE is very hard to fill. I personally would probably not do 40 again.

Warren Wang

On 7/13/16, 9:10 AM, "ceph-users on behalf of Götz Reinicke - IT Koordinator" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of goetz.reinicke@xxxxxxxxxxxxxxx> wrote:

>On 13.07.16 at 14:59, Joe Landman wrote:
>>
>> On 07/13/2016 08:41 AM, ceph@xxxxxxxxxxxxxx wrote:
>>> 40Gbps can be used as 4*10Gbps.
>>>
>>> I guess welcome feedback should not be restricted to "usage of a 40Gbps
>>> port" but extended to "usage of more than a single 10Gbps port, e.g.
>>> 20Gbps etc." too.
>>>
>>> Are there people here who are using more than 10G on a Ceph server?
>>
>> We have built, and are building, Ceph units for some of our customers
>> with dual 100Gb links. The storage box was one of our all-flash
>> Unison units for OSDs. Similarly, we have several customers actively
>> using multiple 40GbE links on our 60-bay Unison spinning rust disk
>> (SRD) box.
>>
>Now we are getting closer. Can you tell me which 40G NIC you use?
>
> /götz
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
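[Editor's note: the "40Gbps can be used as 4*10Gbps" point quoted above is usually realized either with a QSFP+ breakout cable into four 10GbE switch ports, or on the host side by aggregating four 10GbE links into one LACP bond. A minimal iproute2 sketch of the host-side bond follows; the interface names (eth0-eth3), bond name, and address are placeholders, and the switch side must be configured for 802.3ad as well.]

```shell
# Hypothetical sketch: bond four 10GbE links into one 802.3ad (LACP) aggregate.
# Interface names eth0-eth3 and the 192.0.2.10/24 address are assumptions;
# substitute your own. Requires root and a matching LACP config on the switch.
ip link add bond0 type bond mode 802.3ad

for i in 0 1 2 3; do
    ip link set "eth$i" down            # slaves must be down before enslaving
    ip link set "eth$i" master bond0    # attach each 10GbE port to the bond
done

ip addr add 192.0.2.10/24 dev bond0
ip link set bond0 up
```

Note that a single TCP flow still rides one member link, so per-connection throughput tops out near 10Gbps; the aggregate only helps with many concurrent flows, which is typical for Ceph OSD traffic.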