Performance impact of a heterogeneous environment

Hi folks.

I had a quick search but found nothing concrete on this, so I thought I would ask.

We currently have a four-host Ceph cluster with an NVMe pool (1 OSD per host) and an HDD pool (1 OSD per host). Both OSDs use a separate NVMe for DB/WAL. The machines are identical (homogeneous): Ryzen 7 5800X CPUs with 64GB of DDR4-3200 RAM. The NVMes are 1TB Seagate IronWolfs and the HDDs are 16TB Seagate IronWolfs.
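
For reference, each OSD pair was created along these lines (a rough sketch; device paths are just examples, not our real ones):

    # HDD OSD with its DB/WAL on a partition of the shared NVMe
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p2
    # NVMe OSD, likewise with DB/WAL split out onto the shared NVMe
    ceph-volume lvm create --data /dev/nvme1n1 --block.db /dev/nvme0n1p1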

We want to add more nodes, mainly for capacity and resilience. We have an old three-node cluster of Dell R740 servers that could be added to this Ceph cluster. Instead of DDR4 they use DDR3 (although 1.5TB each!), and instead of Ryzen 7 5800X CPUs they use old Intel Xeon E5-4657L v2 CPUs (96 cores at 2.4GHz).

What would be the performance impact of adding these three nodes with the same OSD layout (i.e. 1 NVMe OSD and 1 HDD OSD per host, each with a separate NVMe for DB/WAL)?
Would we get better or worse performance overall? Can weighting be used to mitigate performance penalties, and if so, is it easy to configure?
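
In case it helps frame the weighting question, my understanding is that it is adjusted per OSD with commands like these (a sketch only; the OSD IDs and values are placeholders):

    # Lower an OSD's CRUSH weight so it receives a smaller share of the data
    ceph osd crush reweight osd.4 4.0
    # Make an OSD less likely to be chosen as the primary for reads
    ceph osd primary-affinity osd.4 0.5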

On performance, I would deem current performance OK for our use case (VM disks); we are running on a 10GbE network with dedicated NICs for the public and cluster networks.
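
For completeness, the network split in our ceph.conf looks roughly like this (subnets are illustrative, not our real ones):

    [global]
    public_network  = 192.168.10.0/24
    cluster_network = 192.168.20.0/24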

Many thanks in advance

Tino


