Hello Adam,

Can you describe what performance you want to get out of your cluster? What's the use case? EC or replica?

In general, more disks are preferred over bigger ones. As Micron has not provided us with demo hardware, we can't say how fast these disks are in reality.

Before thinking about 40 vs. 25/50/100 GbE, I would work on reducing the latency of these disks.

--
Martin Verges
Managing director

Hint: Secure one of the last slots in the upcoming 4-day Ceph Intensive Training at https://croit.io/training/4-days-ceph-in-depth-training.

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Fri, 31 Jan 2020 at 13:58, Adam Boyhan <adamb@xxxxxxxxxx> wrote:

> Looking to roll out an all-flash Ceph cluster. Wanted to see if anyone else
> is using Micron drives, and to get some basic input on my design so far.
>
> Basic config
> Ceph OSD nodes
> 8x Supermicro A+ Server 2113S-WTRT
> - AMD EPYC 7601 32-core 2.2 GHz
> - 256GB RAM
> - AOC-S3008L-L8e HBA
> - 10GbE SFP+ for the client network
> - 40GbE QSFP+ for the Ceph cluster network
>
> OSD
> 10x Micron 5300 PRO 7.68TB in each Ceph node
> - 80 total drives across the 8 nodes
>
> WAL/DB
> 5x Micron 7300 MAX NVMe 800GB per Ceph node
> - Plan on dedicating 1 for each 2 OSDs
>
> Still thinking through an external monitor node as I have a lot of options, but
> this is a pretty good start. Open to suggestions as well!
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
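
For anyone weighing Martin's EC-or-replica question against Adam's spec, here is a minimal Python sketch of the capacity arithmetic. The drive counts and sizes are taken from the post above; the 3x replication and EC 4+2 profiles are assumptions for illustration only, since the original post does not name a protection scheme.

    # Back-of-the-envelope sizing for the cluster described above.
    # The 3x replication and EC 4+2 profiles are assumptions for illustration;
    # the original post does not state which protection scheme will be used.

    osd_nodes = 8
    osds_per_node = 10
    osd_size_tb = 7.68           # Micron 5300 PRO
    nvme_per_node = 5
    nvme_size_gb = 800           # Micron 7300 MAX, shared for WAL/DB

    raw_tb = osd_nodes * osds_per_node * osd_size_tb
    usable_replica3_tb = raw_tb / 3
    k, m = 4, 2                  # hypothetical EC profile
    usable_ec_tb = raw_tb * k / (k + m)

    # 5 NVMe drives shared across 10 OSDs = 1 NVMe per 2 OSDs
    db_per_osd_gb = (nvme_per_node * nvme_size_gb) / osds_per_node

    print(f"raw capacity:            {raw_tb:7.1f} TB")
    print(f"usable @ 3x replication: {usable_replica3_tb:7.1f} TB")
    print(f"usable @ EC {k}+{m}:        {usable_ec_tb:7.1f} TB")
    print(f"WAL/DB share per OSD:    {db_per_osd_gb:7.0f} GB")

With the stated 1 NVMe per 2 OSDs, each OSD gets roughly half of an 800GB 7300 MAX (about 400GB) for WAL/DB, which the sketch also prints.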