Re: Performance in Proof-of-Concept cluster

Hi,

Run a close-to-the-metal benchmark directly on the raw disks first, just to see the theoretical ceiling.
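
For example, a fio run like this against one of the raw devices shows roughly what a single SSD can do (the device path is just a placeholder, and this is destructive to any data on that disk, so only point it at an unused device):

  fio --name=raw-randwrite --filename=/dev/sdX --direct=1 \
      --ioengine=libaio --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=4 --runtime=60 --time_based \
      --group_reporting

The 4k random-write IOPS from that are the per-disk ceiling to compare your Ceph client numbers against.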

Also, rerun your benchmarks with random writes, to get more honest numbers.
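
If you are benchmarking from a Ceph client node, one way is fio's rbd engine against a throwaway image (assuming your fio build has rbd support; the pool and image names below are placeholders, so create a scratch image first rather than pointing it at a VM disk):

  rbd create --pool rbd --size 10G fio-test
  fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
      --pool=rbd --rbdname=fio-test --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --runtime=60 --time_based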

Based on the numbers so far, you seem to be getting 40k client IOPS at 512 threads. Due to 3x replication across 3 nodes, that translates roughly 1:1 to 40k per node, so ~10k per SSD. Depending on how the direct-on-disk benchmark (requested above) comes out, this can be either good or bad.
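
Spelled out, assuming writes dominate and spread evenly across disks:

  40,000 client write IOPS x 3 replicas = 120,000 backend writes/s
  120,000 / 3 nodes                     = 40,000 writes/s per node
  40,000 / 4 SSDs per node              = ~10,000 writes/s per SSD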

You might want to try 2 ceph-osd processes per SSD, just to see if the Ceph process is the bottleneck.
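
If you want to test that, ceph-volume can split a device when (re)creating OSDs, e.g. (this wipes and redeploys the OSDs on that device, so drain and rebalance first; the device path is a placeholder):

  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm batch --osds-per-device 2 /dev/sdX

You would probably want to do this one disk at a time and let the cluster backfill in between.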

Hope this gives you food for thought.

On 7/6/22 13:13, Eneko Lacunza wrote:
Hi all,

We have a proof of concept HCI cluster with Proxmox v7 and Ceph v15.

We have 3 nodes:

2x Intel Xeon Gold 5218 (16 cores/32 threads per socket)
Dell PERC H330 Controller (SAS3)
4x Samsung PM1634 3.84TB SAS 12Gb/s SSD
Network is LACP 2x10Gbps

This cluster is used for some VDI tests, with Windows 10 VMs.

Pool has size=3/min=2 and is used for RBD (KVM/QEMU VMs)

We are seeing Ceph performance of about 600 MiB/s read and 500 MiB/s write, with roughly 6,000 read IOPS and 2,000 write IOPS. Reads and writes are simultaneous (mixed I/O), as reported by Ceph.

Is this reasonable performance for the hardware we have? We see about 25-30% CPU usage on the nodes, and ceph-osd processes spiking between 600% and 1000% (I guess that's 6-10 threads fully used).

I have checked cache for the disks, but they report cache as "Not applicable".
BIOS power profile is performance and C states are disabled.

Thanks

Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



