Re: Micron SSD/Basic Config


 



On Fri, Jan 31, 2020 at 2:06 PM EDH - Manuel Rios
<mriosfer@xxxxxxxxxxxxxxxx> wrote:
>
> Hmm, change 40Gbps to 100Gbps networking.
>
> 40Gbps technology is just a bond of 4x 10G links, with some latency added by the link aggregation.
> 100Gbps and 25Gbps have lower latency and good performance. In Ceph, about 50% of the latency comes from network commits and the other 50% from disk commits.

40G ethernet is not the same as a 4x 10G bond. A bond load balances on a
per-flow (or sometimes per-packet) basis, whereas a 40G link uses all four
lanes even for a single packet.
100G, in turn, is "just" 4x 25G.
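
To make the difference concrete, here is a minimal, hedged sketch in Python of
how a layer3+4-style bond picks one member link per flow (the hash below is
illustrative, not the kernel's exact algorithm): any single TCP stream stays
pinned to one 10G slave, while a native 40G port stripes every frame across
all four lanes.

    # Simplified per-flow distribution as done by a 4x10G bond
    # (xmit_hash_policy layer3+4 style). Illustrative hash only.
    import zlib

    def slave_for_flow(src_ip, dst_ip, src_port, dst_port, n_slaves=4):
        key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
        return zlib.crc32(key) % n_slaves   # same flow -> same slave link

    # A single Ceph client<->OSD TCP connection always lands on one 10G link:
    print(slave_for_flow("10.0.0.1", "10.0.0.2", 45000, 6800))
    print(slave_for_flow("10.0.0.1", "10.0.0.2", 45000, 6800))  # identical

So a single flow never exceeds ~10G on the bond; a native 40G or 100G link
does not have that limitation.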

I also wouldn't agree that network and disk latency is a 50/50 split
in Ceph unless you have some NVRAM disks or something.

Even for the network speed, processing and queuing in the network stack
dominate over the serialization-delay difference between 40G and 100G
(serializing 4 KB takes about 320 ns at 100G and about 800 ns at 40G; I
don't have figures for processing times on 40/100G ethernet, but 10G fiber
is around 300 ns and 10GBASE-T around 2300 ns).
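
For reference, those serialization numbers follow directly from payload size
divided by line rate; a minimal sketch of the arithmetic:

    # Serialization delay = bits on the wire / line rate.
    def serialization_delay_ns(payload_bytes, rate_gbps):
        return payload_bytes * 8 / (rate_gbps * 1e9) * 1e9

    print(round(serialization_delay_ns(4096, 40)))    # ~819 ns at 40G
    print(round(serialization_delay_ns(4096, 100)))   # ~328 ns at 100G

(Framing overhead is ignored here, but it does not change the order of
magnitude.)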

Paul


>
> A quick graph: https://blog.mellanox.com/wp-content/uploads/John-Kim-030416-Fig-3a-1024x747.jpg
> Article: https://blog.mellanox.com/2016/03/25-is-the-new-10-50-is-the-new-40-100-is-the-new-amazing/
>
> Micron has its own whitepaper for Ceph, and it looks like it performs fine.
> https://www.micron.com/-/media/client/global/documents/products/other-documents/micron_9200_max_ceph_12,-d-,2,-d-,8_luminous_bluestore_reference_architecture.pdf?la=en
>
>
> As your budget is high, please buy 3x $1.5K nodes for your monitors and you will sleep better. They just need 4 cores / 16GB RAM and 2x 128GB SSD or NVMe M.2.
>
> -----Original Message-----
> From: Adam Boyhan <adamb@xxxxxxxxxx>
> Sent: Friday, January 31, 2020 13:59
> To: ceph-users <ceph-users@xxxxxxx>
> Subject: Micron SSD/Basic Config
>
> Looking to roll out an all-flash Ceph cluster. Wanted to see if anyone else is using Micron drives, and to get some basic input on my design so far.
>
> Basic Config
> Ceph OSD Nodes
> 8x Supermicro A+ Server 2113S-WTRT
> - AMD EPYC 7601 32-core 2.2GHz
> - 256GB RAM
> - AOC-S3008L-L8e HBA
> - 10Gb SFP+ for client network
> - 40Gb QSFP+ for Ceph cluster network
>
> OSD
> 10x Micron 5300 PRO 7.68TB in each Ceph node
> - 80 total drives across the 8 nodes
>
> WAL/DB
> 5x Micron 7300 MAX NVMe 800GB per Ceph Node
> - Plan on dedicating 1 to every 2 OSDs
>
> Still thinking about an external monitor node, as I have a lot of options, but this is a pretty good start. Open to suggestions as well!
>
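
As a back-of-the-envelope check on the proposed WAL/DB layout (my own sketch,
not from the thread): with one 800GB Micron 7300 shared by two OSDs, each
7.68TB OSD gets roughly 400GB of NVMe for block.db/WAL, i.e. about 5% of the
OSD's capacity, which sits above the ~4% of OSD size often cited as BlueStore
block.db sizing guidance.

    # Rough sanity check of the proposed 1 NVMe : 2 OSDs ratio
    # (figures taken from the hardware list above).
    osd_size_gb    = 7680    # Micron 5300 PRO 7.68TB
    osds_per_node  = 10
    nvme_size_gb   = 800     # Micron 7300 MAX 800GB
    nvmes_per_node = 5

    osds_per_nvme = osds_per_node / nvmes_per_node        # 2 OSDs share one NVMe
    db_per_osd_gb = nvme_size_gb / osds_per_nvme          # ~400 GB per OSD
    db_ratio_pct  = 100 * db_per_osd_gb / osd_size_gb     # ~5.2 %
    print(osds_per_nvme, db_per_osd_gb, round(db_ratio_pct, 1))
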
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


