Recent ceph.io Performance Blog Posts

Hi Folks,

I thought I would mention that I've recently released a few performance articles on the Ceph blog that might be of interest to people:

1. https://ceph.io/en/news/blog/2022/rocksdb-tuning-deep-dive/
2. https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/
3. https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/

The first covers RocksDB tuning: how we arrived at our defaults, an analysis of some common settings that have been floating around on the mailing list, and potential new settings that we are considering making default in the future.
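For anyone who wants to follow along at home: BlueStore passes its RocksDB options through the bluestore_rocksdb_options setting, so experimenting generally looks something like the sketch below. The specific values here are just placeholders for illustration, not recommendations from the article.

    [osd]
    # Illustrative values only -- see the blog post for the analysis of real settings.
    bluestore_rocksdb_options = compression=kLZ4Compression,write_buffer_size=268435456,max_write_buffer_number=4

Note that the OSD only reads this at startup, so a restart is needed for changes to take effect.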

The second covers how to tune QEMU/KVM with librbd to achieve high single-client performance on a small (30 OSD) NVMe-backed cluster. It also covers the performance impact of enabling 128-bit AES over-the-wire encryption.
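For reference, over-the-wire encryption is turned on by switching the messenger v2 connection modes to "secure". A rough sketch of enabling it cluster-wide (the article describes the exact setup used for the measurements):

    ceph config set global ms_cluster_mode secure
    ceph config set global ms_service_mode secure
    ceph config set global ms_client_mode secure

Connections negotiate the mode when they are established, so daemons and clients need to reconnect (or be restarted) before it takes effect.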

The third covers per-OSD CPU/core scaling and the kind of IOPS/core and IOPS/NVMe numbers that are achievable both on a single OSD and on a larger (60 OSD) NVMe cluster. In this case, there are enough clients and a high enough per-client iodepth to saturate the OSD(s).
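If you want to try the core-scaling experiments on your own hardware, one simple way to restrict a running OSD to a fixed number of cores is taskset (this is just an example approach, not necessarily the exact methodology from the article):

    # Pin a running ceph-osd process and all of its threads to cores 0-3.
    # Replace <osd-pid> with the PID of the OSD you want to constrain.
    taskset -acp 0-3 <osd-pid>

cgroup or container CPU limits work just as well; the article walks through the actual test setup and the resulting IOPS/core numbers.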

I hope these are helpful or at least interesting!

Thanks,
Mark



