Re: ceph-osd iodepth for high-performance SSD OSDs

On 10/26/21 10:22, Frank Schilder wrote:
It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one running per OSD daemon, independent of the number of shards. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deploying even more OSD daemons per OSD?

Regarding num op threads, see Slide 23 of [1]:

• osd_op_num_threads_per_shard * osd_op_num_shards
• Keep at number_of_threads_your_cpu_can_handle - async_msgr_op_threads - 3..5
  – For example, "osd op num shards = 8", "osd op num threads per shard = 2" and "ms async op threads = 3" for a 22-core CPU with HT/SMT (2*8 + 3 = 19, 3 threads left for BlueStore, RocksDB, etc.)
• Increase in case of slower NVMe to improve IOPS and latency in random reads/writes
  – Offsets NVMe processing time (iowait) by letting another thread do CPU-consuming work in the meantime
• Don’t set too high, or context switches will kill your performance
• Too low values will cause your OSDs to stall
• Change requires OSD restart
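
For reference, a minimal sketch of what the example values from that slide would look like in ceph.conf (this assumes the 22-core example above; tune the numbers to your own CPU budget):

    [osd]
    # 8 shards x 2 threads per shard = 16 op worker threads
    osd_op_num_shards = 8
    osd_op_num_threads_per_shard = 2
    # messenger worker threads ("ms async op threads" on the slide)
    ms_async_op_threads = 3

On recent releases the same values can also be stored centrally, e.g. "ceph config set osd osd_op_num_shards 8", but as the slide notes the OSDs still need a restart for the new shard/thread layout to take effect.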


Gr. Stefan

[1]: https://static.sched.com/hosted_files/cephalocon2019/d6/ceph%20on%20nvme%20barcelona%202019.pdf



