On 10/09/2020 22:35, vitalif@xxxxxxxxxx wrote:
Hi George
Author of Ceph_performance here! :)
I suspect you're running the tests with 1 PG. Each PG's requests are always serialized, which is why the OSD doesn't utilize all of its threads with a single PG. You need something like 8 PGs per OSD; more than 8 usually doesn't improve the results.
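Something like this should do it, assuming a replicated pool named "ramdisk" as in the fio commands below (adjust the pool name to yours):
# ceph osd pool set ramdisk pg_num 8
# ceph osd pool set ramdisk pgp_num 8
(On recent releases pgp_num follows pg_num automatically, so the second command may be unnecessary.)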
Also note that read tests are meaningless after a full overwrite on small OSDs, because everything ends up fitting in the cache. Restart the OSD to clear it. You can also drop the cache via the admin socket, but restarting is the simplest way.
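A minimal sketch, assuming the OSD id is 0:
# systemctl restart ceph-osd@0
or, on releases that have the cache drop admin socket command:
# ceph daemon osd.0 cache drop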
I've repeated your test with brd (the Linux RAM block device). My results with 8 PGs, after filling the RBD image, disabling CPU powersave and restarting the OSD, are:
# fio -name=test -ioengine=rbd -bs=4k -iodepth=1 -rw=randread -pool=ramdisk -rbdname=testimg
read: IOPS=3586, BW=14.0MiB/s (14.7MB/s)(411MiB/29315msec)
lat (usec): min=182, max=5710, avg=277.41, stdev=90.16
# fio -name=test -ioengine=rbd -bs=4k -iodepth=1 -rw=randwrite -pool=ramdisk -rbdname=testimg
write: IOPS=1247, BW=4991KiB/s (5111kB/s)(67.0MiB/13746msec); 0 zone resets
lat (usec): min=555, max=4015, avg=799.45, stdev=142.92
# fio -name=test -ioengine=rbd -bs=4k -iodepth=128 -rw=randwrite -pool=ramdisk -rbdname=testimg
write: IOPS=4138, BW=16.2MiB/s (16.9MB/s)(282MiB/17451msec); 0 zone resets
658% CPU
# fio -name=test -ioengine=rbd -bs=4k -iodepth=128 -rw=randread -pool=ramdisk -rbdname=testimg
read: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(979MiB/15933msec)
540% CPU
Basically the same shit as on an NVMe. So even an "in-memory Ceph" is slow, haha.
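For anyone who wants to reproduce the brd setup, a rough sketch (the sizes and the deployment step below are illustrative, not necessarily what was used here; adjust to your environment):
# modprobe brd rd_nr=1 rd_size=8388608
# ceph-volume lvm create --data /dev/ram0
# ceph osd pool create ramdisk 8 8 replicated
# rbd pool init ramdisk
# rbd create ramdisk/testimg --size 4096
rd_size is in KiB, so that's one 8 GiB ramdisk at /dev/ram0. For a single-OSD pool you'd also set size=1/min_size=1 (recent releases ask for explicit confirmation there). Then point fio at pool=ramdisk and rbdname=testimg as above.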
Hello!
Thank you for the feedback! The PG idea is a really good one. Unfortunately, the
autoscaler had already raised it to 32, and I get 30 kIOPS from a 32-PG, size=1
pool on the ramdisk. :-/
I've also checked the read speed (I hadn't done this before, I have no idea why),
and I got an amazing 160 kIOPS, but I suspect that's just caching.
Anyway, thank you for the data. So the takeaway is roughly 600% CPU in exchange
for ~16-17 kIOPS per OSD.
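I guess I can pin it by disabling the autoscaler for the pool and setting pg_num
by hand, something like this (assuming my pool is also named "ramdisk"):
# ceph osd pool set ramdisk pg_autoscale_mode off
# ceph osd pool set ramdisk pg_num 8
(8 just mirrors your suggestion; any value I want to compare would do.)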
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx