We have a small Ceph cluster built from components that were phased out of compute applications. The current cluster consists of i7-860 nodes with 6 disks (5 TB, 7200 RPM) each, 8 nodes in total, for 48 OSDs.

A compute cluster is being decommissioned, which will free up Ryzen 5 1600 hardware (8 nodes with 16 GB RAM each) to replace the current nodes. How could we best distribute the OSDs across the Ryzen systems (keeping the existing disks for storage) to get a good performance improvement? Unfortunately the interconnect is still only 1 Gb/s, so we expect it to be a limiting factor. Would it make sense to build fewer, bigger nodes, e.g. 6 nodes with 8 disks each, or consolidate even further?

We would like to move the Luminous cluster to Nautilus/BlueStore, and we can get SSDs for each of the nodes, since that appears to be essential for performance. Can we actually benefit from improvements in the OSD nodes if the network is so limited? Would bonding of network interfaces be a workaround until we can get a network upgrade, or are we overestimating the power of the upgraded OSD nodes?

What strategy would you suggest with these resources? Any comments and suggestions would be highly welcome :)

Thanks in advance,
Philipp
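
P.S. For context, here is the rough back-of-envelope arithmetic we are working from. It is only a sketch: it assumes roughly 150 MB/s sequential per 7200 RPM disk (probably optimistic for Ceph's mixed workload) and about 117 MB/s usable on a 1 Gb/s link.

# Back-of-envelope: per-node disk throughput vs. a 1 Gb/s front-end link.
# Assumptions: ~150 MB/s sequential per 7200 RPM disk (optimistic for
# Ceph's mixed/random I/O) and ~94% usable payload on a 1 Gb/s interface.
DISK_MBPS = 150
LINK_MBPS = 1000 / 8 * 0.94   # ~117 MB/s usable per interface

for disks_per_node in (6, 8, 12):
    aggregate = disks_per_node * DISK_MBPS
    print(f"{disks_per_node} disks/node: ~{aggregate:.0f} MB/s of disk "
          f"behind ~{LINK_MBPS:.0f} MB/s of network "
          f"(~{aggregate / LINK_MBPS:.1f}x oversubscribed)")

By these (admittedly crude) numbers even 6 disks per node are far more than one 1 Gb/s link can carry, which is why we are wondering whether bonding or fewer, denser nodes actually helps, or whether the network upgrade has to come first.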