I think an 800 GB NVMe per 2 SSDs is overkill. One OSD usually only
requires a 30 GB block.db, so 400 GB per OSD is a lot. On the other
hand, does the 7300 have twice the IOPS of the 5300? In fact, I'm not
sure a 7300 + 5300 OSD will perform better than a plain 5300 OSD at all.
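If you do keep the 7300s for DB/WAL, a rough sketch of how ceph-volume
could carve them up (assuming a Nautilus-era ceph-volume; the device
names /dev/sd{a..j} and /dev/nvme{0..4}n1 are placeholders for your
data SSDs and NVMe drives):

# batch pairs the 10 data devices with the 5 DB devices, creating
# two 30G block.db LVs on each NVMe; the rest of the 800 GB
# simply sits unused.
ceph-volume lvm batch --bluestore \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj \
    --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
                 /dev/nvme3n1 /dev/nvme4n1 \
    --block-db-size 30G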
It would be interesting if you could benchmark & compare it though :)
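A simple starting point would be the usual 4k sync-write fio test on a
bare drive of each model (the invocation below is just my suggestion;
/dev/sdX is a placeholder, and the test is destructive, so only run it
on an empty disk):

# single-threaded 4k sync writes, the pattern that dominates WAL/DB
# traffic; compare the latency numbers of a 5300 vs. a 7300
fio --name=sync-write --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting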
Hmm, change the 40Gbps to 100Gbps networking.
40Gbps technology is just a bond of 4x10G links, with some latency
added by the link aggregation.
100Gbps and 25Gbps have lower latency and good performance. In Ceph,
roughly 50% of the latency comes from network commits and the other
50% from disk commits.
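This is easy to sanity-check on your own hardware before buying
switches; something like the following (plain ping/iperf3, hostnames
are placeholders):

# round-trip latency between two OSD nodes (flood ping needs root)
ping -f -c 10000 -s 64 osd-node-2

# cluster-network throughput with 8 parallel streams
iperf3 -c osd-node-2 -P 8 -t 30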
A quick graph:
https://blog.mellanox.com/wp-content/uploads/John-Kim-030416-Fig-3a-1024x747.jpg
Article:
https://blog.mellanox.com/2016/03/25-is-the-new-10-50-is-the-new-40-100-is-the-new-amazing/
Micron has their own whitepaper for Ceph, and it looks like it performs fine:
https://www.micron.com/-/media/client/global/documents/products/other-documents/micron_9200_max_ceph_12,-d-,2,-d-,8_luminous_bluestore_reference_architecture.pdf?la=en
As your budget is high, please buy 3 x $1.5K nodes for your monitors
and you will sleep better. They just need 4 cores / 16GB RAM and
2x128GB SSD or NVMe M.2.
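On the software side a monitor needs nothing special; a minimal sketch
of the mon part of ceph.conf (fsid, names and addresses are
placeholders):

[global]
fsid = <your-cluster-fsid>
mon_initial_members = mon1, mon2, mon3
mon_host = 10.0.0.11, 10.0.0.12, 10.0.0.13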
-----Original Message-----
From: Adam Boyhan <adamb@xxxxxxxxxx>
Sent: Friday, January 31, 2020 13:59
To: ceph-users <ceph-users@xxxxxxx>
Subject: Micron SSD/Basic Config
Looking to roll out an all-flash Ceph cluster. Wanted to see if anyone
else is using Micron drives, and to get some basic input on my design
so far.
Basic Config
Ceph OSD Nodes
8x Supermicro A+ Server 2113S-WTRT
- AMD EPYC 7601 32-core 2.2GHz
- 256GB RAM
- AOC-S3008L-L8e HBA
- 10Gb SFP+ for client network
- 40Gb QSFP+ for Ceph cluster network
OSD
10x Micron 5300 PRO 7.68TB in each Ceph node
- 80 total drives across the 8 nodes
WAL/DB
5x Micron 7300 MAX NVMe 800GB per Ceph Node
- Plan on dedicating 1 to every 2 OSDs
Still thinking through an external monitor node as I have a lot of
options, but this is a pretty good start. Open to suggestions as well!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx