40Gbps can be used as 4*10Gbps, I guess. Feedback welcome: the question
should not be limited to "usage of a 40Gbps port", but extended to
"usage of more than a single 10Gbps port", e.g. 20Gbps etc. too. Are
there people here who are using more than 10G on a Ceph server? (A
minimal bonding sketch is at the end of this post.)

On 13/07/2016 14:27, Wido den Hollander wrote:
>
>> On 13 July 2016 at 12:00, Götz Reinicke - IT Koordinator <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
>>
>>
>> On 13.07.16 at 11:47, Wido den Hollander wrote:
>>>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>> can anybody give some real-world feedback on what hardware
>>>> (CPU/cores/NIC) you use for a 40Gb (file)server (SMB and NFS)? The Ceph
>>>> cluster will be mostly RBD images; S3 in the future, CephFS we will see :)
>>>>
>>>> Thanks for some feedback and hints! Regards . Götz
>>>>
>>> Why do you think you need 40Gb? That's some serious traffic to the OSDs and I doubt it's really needed.
>>>
>>> Latency-wise 40Gb isn't much better than 10Gb, so why not stick with that?
>>>
>>> It's also better to have more smaller nodes than a few big nodes with Ceph.
>>>
>>> Wido
>>>
>> Hi Wido,
>>
>> maybe my post was misleading. The OSD nodes do have 10G; the fileserver
>> in front, facing the clients/desktops, should have 40G.
>>
> Ah, the fileserver will re-export RBD via Samba? Any Xeon E5 CPU will do just fine, I think.
>
> Still, 40GbE is a lot of bandwidth!
>
> Wido
>
>> OSD nodes/cluster 2*10Gb bond ---- 40G fileserver 40G ---- 1G/10G clients
>>
>> /Götz
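
For reference, a minimal sketch of what "using 40Gbps as 4*10Gbps" could
look like as an 802.3ad (LACP) bond on a Debian-style system. The
interface names (ens1f0-ens1f3), the address and the hash policy are
assumptions for illustration, and the switch ports have to be
configured as an LACP group as well:

    # /etc/network/interfaces fragment (requires the ifenslave package)
    auto bond0
    iface bond0 inet static
        address 192.168.10.5
        netmask 255.255.255.0
        bond-slaves ens1f0 ens1f1 ens1f2 ens1f3
        bond-mode 802.3ad               # LACP
        bond-miimon 100                 # link monitoring interval in ms
        bond-xmit-hash-policy layer3+4  # spread flows by IP + port

Note that a single TCP stream still tops out at roughly one member
link's 10Gbps, because the hash policy pins each flow to one slave; the
aggregate only helps when many clients (or many flows) are active in
parallel.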
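
And a rough sketch of what the 40G gateway in the "OSD nodes ---- 40G
fileserver ---- clients" picture above typically does: map an RBD image
with the kernel client, put a local filesystem on it, and export it
over NFS and SMB. Pool/image names, the mount point and the share
definitions below are made up for illustration:

    # on the fileserver/gateway
    rbd map rbd/fileshare01     # kernel RBD client, appears as /dev/rbd0
    mkfs.xfs /dev/rbd0          # first use only!
    mount /dev/rbd0 /srv/share

    # /etc/exports (NFS)
    /srv/share 192.168.20.0/24(rw,sync,no_subtree_check)

    # /etc/samba/smb.conf (SMB)
    [share]
        path = /srv/share
        read only = no

Keep in mind that a plain filesystem like XFS on top of RBD is a
single-writer setup: the same image must not be mounted read-write on a
second gateway. So this scales up (bigger NIC and CPU on one box)
rather than out, which is where the 40GbE question comes from.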