Re: Micron SSD/Basic Config

Appreciate the input. 

Looking at those articles, it sounds like the 40G they are talking about is 4x bonded 10G connections. 

I'm looking at a native 40Gbps link, without bonding, for throughput. Is that still the same? 

https://www.fs.com/products/29126.html 

Yep, most of this is based on the white paper, with a few changes here and there. 



From: "EDH - Manuel Rios" <mriosfer@xxxxxxxxxxxxxxxx> 
To: "adamb" <adamb@xxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxx> 
Sent: Friday, January 31, 2020 8:05:52 AM 
Subject: RE: Micron SSD/Basic Config 

Hmm, I would change the 40Gbps networking to 100Gbps. 

40Gbps technology is just a bond of 4x10G links, with some extra latency due to the link aggregation. 
100Gbps and 25Gbps have lower latency and good performance. In Ceph, roughly 50% of the latency comes from network commits and the other 50% from disk commits. 
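As a back-of-the-envelope illustration of that 50/50 split (the latency figures below are hypothetical, chosen only to show how shaving network latency moves the total):

```python
# Hypothetical per-write commit latencies in microseconds; not measured values.
def total_commit_latency(net_us, disk_us):
    """A Ceph write commit waits on both the network round trip and the disk flush."""
    return net_us + disk_us

# Suppose network and disk each contribute ~200 us, i.e. the 50/50 split above.
baseline = total_commit_latency(200, 200)    # 400 us total

# Halving only the network latency (e.g. moving to 25/100Gbps gear)
# removes half of the network share, so the total drops by 25%, not 50%.
faster_net = total_commit_latency(100, 200)  # 300 us total

print(baseline, faster_net)
```

The point of the sketch: because disk commits still dominate their half, a faster network can at best remove the network's share of the total, which is why the 50/50 split matters when deciding where to spend budget.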

A fast graph : https://blog.mellanox.com/wp-content/uploads/John-Kim-030416-Fig-3a-1024x747.jpg 
Article: https://blog.mellanox.com/2016/03/25-is-the-new-10-50-is-the-new-40-100-is-the-new-amazing/ 

Micron has their own white paper for Ceph, and it looks like it performs fine. 
https://www.micron.com/-/media/client/global/documents/products/other-documents/micron_9200_max_ceph_12,-d-,2,-d-,8_luminous_bluestore_reference_architecture.pdf?la=en 


As your budget is high, please buy 3 x $1.5K nodes for your monitors and you will sleep better. They just need 4 cores, 16GB RAM, and 2x128GB SSD or NVMe M.2. 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


