Performance Discrepancy Between rbd bench and fio on Ceph RBD

Hi all,

I have a question about the performance difference between rbd
bench and fio on a Ceph RBD image.
While testing my Ceph environment, I noticed that rbd bench
reports significantly higher write throughput than fio does when
run against an XFS filesystem mounted on the same RBD image.

Here are the test details:

fio test:
------------
fio --name=ceph_bench --rw=write --bs=4M --direct=1 --ioengine=libaio
--size=4G --numjobs=1 --filename=/mnt/ceph10/backup-cluster1/testfile

Results:
    Write speed: ~70.3 MiB/s (73.7 MB/s)
    IOPS: ~17
    Latency: ~56.4 ms (average)
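One detail worth flagging in the command above: libaio defaults to iodepth=1, so fio keeps only a single 4M write in flight, while the rbd bench run below uses 16 threads. A sketch of a more like-for-like fio run (same file path and sizes as above, only the queue depth raised; this is an illustration, not a command from the original post):

```shell
# Same workload as above, but with 16 in-flight I/Os to roughly match
# rbd bench's --io-threads 16 (iodepth is honored by libaio + direct=1).
fio --name=ceph_bench_qd16 --rw=write --bs=4M --direct=1 \
    --ioengine=libaio --iodepth=16 --size=4G --numjobs=1 \
    --filename=/mnt/ceph10/backup-cluster1/testfile
```

If the numbers converge with the rbd bench figures, the gap is largely queue depth rather than XFS overhead.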

rbd bench:
--------------
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 12G
backup-proxmox/cluster1-new --keyring
/etc/ceph/ceph10/ceph.client.admin.keyring --conf
/etc/ceph/ceph10/ceph.conf

Results:
    Write speed: ~150–180 MiB/s (sustained after initial peak)
    IOPS: ~43–46
    Threads: 16
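To take XFS and the kernel block layer out of the picture entirely, fio can also write to the image through librbd with its rbd ioengine. A sketch, reusing the pool/image and conf/keyring paths from the rbd bench invocation above (CEPH_CONF and CEPH_ARGS are the standard librados environment hooks for non-default paths):

```shell
# Drive the RBD image directly via librbd, bypassing the filesystem.
CEPH_CONF=/etc/ceph/ceph10/ceph.conf \
CEPH_ARGS="--keyring /etc/ceph/ceph10/ceph.client.admin.keyring" \
fio --name=rbd_direct --rw=write --bs=4M --iodepth=16 \
    --ioengine=rbd --pool=backup-proxmox --rbdname=cluster1-new \
    --clientname=admin --size=4G
```

Comparing this against the XFS-backed fio run isolates how much of the difference the filesystem layer is responsible for.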

ceph tell osd.X bench:
---------------------------
osd.0: IOPS: 20.03, Throughput: 80.11 MB/s
osd.1: IOPS: 23.08, Throughput: 92.33 MB/s
osd.2: IOPS: 22.01, Throughput: 88.02 MB/s
osd.3: IOPS: 17.65, Throughput: 70.61 MB/s
osd.4: IOPS: 20.15, Throughput: 80.59 MB/s
osd.5: IOPS: 19.71, Throughput: 78.82 MB/s
osd.6: IOPS: 18.96, Throughput: 75.84 MB/s
osd.7: IOPS: 20.13, Throughput: 80.52 MB/s
osd.8: IOPS: 16.45, Throughput: 65.79 MB/s (hm)
osd.9: IOPS: 31.16, Throughput: 124.65 MB/s
osd.10: IOPS: 23.19, Throughput: 92.76 MB/s
osd.12: IOPS: 19.10, Throughput: 76.38 MB/s
osd.13: IOPS: 28.02, Throughput: 112.09 MB/s
osd.14: IOPS: 19.07, Throughput: 76.27 MB/s
osd.15: IOPS: 21.40, Throughput: 85.59 MB/s
osd.16: IOPS: 20.24, Throughput: 80.95 MB/s
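For context, a back-of-envelope bound from the per-OSD figures above: summing the listed throughputs and dividing by the pool's replica count (assumed size=3 here, which is only a guess) gives a rough aggregate ceiling for client writes; a single client at low queue depth will sit far below it.

```shell
# Aggregate the osd bench throughputs above and divide by an assumed
# replication factor of 3 to estimate an upper bound on client writes.
awk 'BEGIN {
  n = split("80.11 92.33 88.02 70.61 80.59 78.82 75.84 80.52 65.79 124.65 92.76 76.38 112.09 76.27 85.59 80.95", t, " ")
  for (i = 1; i <= n; i++) sum += t[i]
  printf "aggregate: %.0f MB/s, ceiling at size=3: %.0f MB/s\n", sum, sum / 3
}'
# prints: aggregate: 1361 MB/s, ceiling at size=3: 454 MB/s
```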


I understand that rbd bench is designed to test the raw performance of
the RBD layer, while fio runs at the filesystem level. However, the
large discrepancy between the two results has left me curious. What
could be causing this difference? Are there any specific factors
related to caching, journaling, or XFS itself that could explain the
significantly lower performance in the fio test?
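A few quick checks that often explain gaps between filesystem-level and RBD-level results (sketches only; rbd0 is a placeholder for whatever device the image is actually mapped to, if krbd is in use):

```shell
# Image layout: small object sizes raise per-op overhead for 4M writes.
rbd --conf /etc/ceph/ceph10/ceph.conf info backup-proxmox/cluster1-new

# Whether the librbd writeback cache is enabled for clients
# (note: the kernel RBD client ignores this setting).
ceph --conf /etc/ceph/ceph10/ceph.conf config get client rbd_cache

# If mapped via krbd: the maximum I/O size the kernel will submit
# before splitting a large write.
cat /sys/block/rbd0/queue/max_sectors_kb
```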

Any insights or recommendations for further troubleshooting would be
greatly appreciated!



Thanks in advance!
Best regards,
Mario
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



