Hardware setup
--------------
3x backend servers
CPU: 2x AMD EPYC 7402 24-Core (48c+48t)
Storage: 24x NVMe
Network: 40 Gbit/s
OS: Ubuntu Focal
Kernel: 5.15.0-18-generic

4x client servers
CPU: 2x AMD EPYC 7402 24-Core (48c+48t)
Network: 40 Gbit/s
OS: Ubuntu Focal
Kernel: 5.11.0-37-generic

Software config
---------------
72 OSDs in total (24 OSDs per host)
1 OSD per NVMe drive
Each OSD runs in an LXD container
Scrub disabled
Deep-scrub disabled
Ceph balancer off
1 pool 'rbd':
- 1024 PG
- PG autoscaler off
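
For reference, a minimal sketch of the commands that would produce a configuration like the one above (the exact commands used are not shown in this post, so treat the names and order as assumptions):

  # Disable scrubbing cluster-wide and turn the balancer off
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  ceph balancer off

  # Create the 'rbd' pool with 1024 PGs and disable the PG autoscaler
  ceph osd pool create rbd 1024 1024
  ceph osd pool set rbd pg_autoscale_mode off
  rbd pool init rbd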

Test environment
----------------
- 128 rbd images (default features, size 128GB)
- All the images are fully written before any tests are done! (4194909 objects allocated)
- client version: Ceph 16.2.7, vanilla packages from eu.ceph.com
- Each client runs fio with rbd engine (librbd) against 32 rbd images (4x32 in total)
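
A sketch of how the images could have been created and pre-filled (image names are hypothetical; the actual commands are not shown in this post):

  # Create 128 images of 128 GB each and write them out fully so that
  # all RADOS objects are allocated before the benchmarks start.
  for i in $(seq -w 0 127); do
      rbd create --size 128G rbd/img-$i
      rbd bench --io-type write --io-size 4M --io-pattern seq \
          --io-total 128G rbd/img-$i
  done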


Tests
----------------
qd - queue depth (number of IOs issued simultaneously to a single RBD image)

IOPS tests
==========
- random IO 4k, 4qd
- random IO 4k, 64qd
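
For illustration, one of the 32 fio jobs a single client would run for the 4k random-write, 64qd case might look like the sketch below (image/client names and runtime are placeholders, not the exact job definitions used):

  fio --name=img-000 \
      --ioengine=rbd --clientname=admin --pool=rbd --rbdname=img-000 \
      --rw=randwrite --bs=4k --iodepth=64 --direct=1 \
      --time_based --runtime=300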

Write			4k 4qd	4k 64qd
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
14.2.16			69630	132093
14.2.22			97491	156288
15.2.14			77586	93003
*15.2.14 - canonical	110424	168943
16.2.0			70526	85827
16.2.2			69897	85231
16.2.4			64713	84046
16.2.5			62099	85053
16.2.6			68394	83070
16.2.7			66974	78601

		
Read			4k 4qd	4k 64qd
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
14.2.16			692848	816109
14.2.22			693027	830485
15.2.14			676784	702233
*15.2.14 - canonical	749404	792385
16.2.0			610798	636195
16.2.2			606924	637611
16.2.4			611093	630590
16.2.5			603162	632599
16.2.6			603013	627246
16.2.7			-	-

* Oddly, the best performance was achieved with the Ceph 15.2.14 build from Canonical (15.2.14-0ubuntu0.20.04.2).
14.2.22 performs very well.
The Canonical 15.2.14 build is the best in terms of writes.
Write performance in the 16.2.x series is quite poor compared to the other versions.



BW tests
========
- sequential IO 64k, 64qd

These results are roughly the same across all Ceph versions.
Writes ~4.2 GB/s
Reads ~12 GB/s

It seems the results here are limited by network bandwidth.
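
A rough back-of-envelope check, assuming a 3x replicated pool (the replication size is not stated above):

  3 backend hosts x 40 Gbit/s ~= 15 GB/s aggregate backend bandwidth
  reads:  ~12 GB/s measured, close to that ceiling
  writes: ~4.2 GB/s of client data x 3 replicas ~= 12.6 GB/s on the backend NICs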


Questions
---------
Is there a known reason for the performance drop in the 16.2.x series?
I'm looking for help/recommendations to get as many IOPS as possible (especially for writes, as reads are good enough).

We've been trying to find out what makes the difference in the Canonical builds. A few leads indicate that
extraopts += -DCMAKE_BUILD_TYPE=RelWithDebInfo was not set for the builds from the Ceph Foundation:
https://github.com/ceph/ceph/blob/master/do_cmake.sh#L86
How can we check this? Would someone be able to take a look?
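
One way to check this from the installed binaries (just a sketch, and it assumes debug info is available, e.g. via the -dbgsym package): GCC records the compile flags in the DWARF DW_AT_producer attribute, so the optimization level can be read back from the binary.

  # A RelWithDebInfo build should show "-O2 -g" in the recorded flags;
  # a build configured without CMAKE_BUILD_TYPE typically shows no -O flag.
  readelf --debug-dump=info /usr/bin/ceph-osd | grep -m1 DW_AT_producer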

BR