Performance (RBD) regression after upgrading beyond v15.2.8

Hi,

While doing some benchmarks I had two identical Ceph clusters available, each consisting of:

3x SuperMicro 1U
AMD EPYC 7302P (16 cores)
256GB DDR
4x Samsung PM983 1.92TB
100Gbit networking

I tested one of these setups, running v16.2.4, with fio against an RBD image:

bs=4k
qd=1

IOps: 695

That was very low, as I was expecting at least 1000 IOps.
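For reference, the fio job was along these lines. Only bs=4k and iodepth=1 are the exact values from these runs; the rbd ioengine, the pool/image names, the rw mode and the runtime are filled in as examples:

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
bs=4k
iodepth=1
direct=1

[randwrite-4k-qd1]
rw=randwrite
time_based=1
runtime=60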

I checked the second Ceph cluster, which was still running v15.2.8; the result: 1364 IOps.

I then upgraded from 15.2.8 to 15.2.13: 725 IOps

Looking at the differences in options.cc between v15.2.8 and v15.2.13 I saw these options changed:

bluefs_buffered_io: false -> true
bluestore_cache_trim_max_skip_pinned: 64 -> 1000

The main difference seems to be 'bluefs_buffered_io', but on both clusters this option was already explicitly set to 'true'.
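This can be verified with something along these lines (osd.0 is just an example daemon):

ceph config get osd bluefs_buffered_io
ceph daemon osd.0 config get bluefs_buffered_io

The first shows the value configured centrally for OSDs, the second what the running daemon is actually using.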

So anything beyond 15.2.8 currently gives me much lower I/O performance at queue depth 1 and block size 4k:

15.2.8: 1364 IOps
15.2.13: 725 IOps
16.2.4: 695 IOps
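
Since this is at queue depth 1, the IOps translate directly into average per-I/O latency: 1/1364 = ~0.73 ms on 15.2.8 versus 1/725 = ~1.38 ms on 15.2.13 and 1/695 = ~1.44 ms on 16.2.4, so the latency roughly doubled.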

Has anybody else seen this as well? I'm trying to figure out where this is going wrong.

Wido
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


