Re: 40Gb fileserver/NIC suggestions

My OSDs have dual 40G NICs.  I typically don't use more than 1Gbps on either network. During heavy recovery activity (like if I lose a whole server), I've seen up to 12Gbps on the cluster network.
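
For context, the public/cluster network split is just a ceph.conf setting; a minimal sketch, with placeholder subnets:

    [global]
    # client and monitor traffic
    public network = 192.0.2.0/24
    # OSD replication and recovery traffic
    cluster network = 198.51.100.0/24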

For reference, my cluster is 9 OSD nodes with 9x 7200RPM 2TB OSDs. They all have RAID cards with 4GB of RAM and a BBU. The disks are in single-disk RAID 1 to make use of the card's write-back (WB) cache.
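
On LSI/MegaRAID-style cards, that kind of per-drive write-back volume is usually created with MegaCli; a hedged sketch (the enclosure:slot ID and adapter number are placeholders, and many setups use single-disk RAID 0 for the same trick):

    # one logical disk per physical drive, write-back (WB) with read-ahead (RA),
    # dropping to write-through if the BBU fails (NoCachedBadBBU)
    MegaCli -CfgLdAdd -r0 [252:0] WB RA Direct NoCachedBadBBU -a0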

I can imagine that with more servers, the peak recovery bandwidth usage may go up even more, up to the maximum write rate into the RAID cards' caches.
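
If a recovery burst like that is more than the network or the clients can tolerate, the usual Ceph throttles apply; a sketch of the standard knobs (values are illustrative, not a recommendation):

    [osd]
    # fewer concurrent backfill/recovery ops per OSD = less recovery bandwidth
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1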

Jake



On Wednesday, July 13, 2016, <ceph@xxxxxxxxxxxxxx> wrote:
A 40Gbps port can also be used as 4*10Gbps ports.

I guess the feedback shouldn't be restricted to "usage of a 40Gbps
port", but extended to "usage of more than a single 10Gbps port, e.g.
20Gbps", too.
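
For the multi-10G case, a Linux LACP bond of two 10G ports is the usual approach; a minimal iproute2 sketch (interface names and the address are placeholders, and the switch side must be configured for 802.3ad as well):

    # create an LACP (802.3ad) bond and enslave two 10G ports
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.0.2.10/24 dev bond0

Note that a single TCP stream still tops out at one slave's 10Gbps; the aggregate only helps across multiple streams.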

Are there people here who are using more than 10G on a Ceph server?

On 13/07/2016 14:27, Wido den Hollander wrote:
>
>> On 13 July 2016 at 12:00, Götz Reinicke - IT Koordinator <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
>>
>>
>> On 13.07.16 at 11:47, Wido den Hollander wrote:
>>>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>> can anybody give some realworld feedback on what hardware
>>>> (CPU/Cores/NIC) you use for a 40Gb (file)server (smb and nfs)? The Ceph
>>>> Cluster will be mostly rbd images. S3 in the future, CephFS we will see :)
>>>>
>>>> Thanks for some feedback and hints! Regards, Götz
>>>>
>>> Why do you think you need 40Gb? That's some serious traffic to the OSDs and I doubt it's really needed.
>>>
>>> Latency-wise 40Gb isn't much better than 10Gb, so why not stick with that?
>>>
>>> With Ceph it's also better to have more, smaller nodes than a few big ones.
>>>
>>> Wido
>>>
>> Hi Wido,
>>
>> Maybe my post was misleading. The OSD nodes do have 10G; the fileserver
>> in front, towards the clients/desktops, should have 40G.
>>
>
> Ah, the fileserver will re-export RBD via Samba? Any Xeon E5 CPU will do just fine, I think.
>
> Still, 40GbE is a lot of bandwidth!
>
> Wido
>
>>
>> OSD NODEs/Cluster 2*10Gb Bond ---- 40G Fileserver 40G ---- 1G/10G Clients
>>
>>     /Götz
>>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
