Re: ceph-osd iodepth for high-performance SSD OSDs

Hi Stefan,

Thanks a lot for this information. I increased osd_op_num_threads_per_shard with little effect (I did restart and checked with config show that the value is applied). I'm afraid I'm bound by the bstore_kv_sync thread, as explained in this thread discussing the issue in great detail (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033522.html). There was also a discussion of giving every shard its own kv store to increase concurrency on the kv store(s), but that does not seem to be implemented - at least not in Mimic. I'm afraid that with my disks I get an effective queue depth of 1-2 per active bstore_kv_sync thread (that is, per OSD daemon), which more or less matches the aggregated performance I see.
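
In case it helps others reading along, a minimal sketch of the kind of checks described above (assuming osd.0 as an example ID and a non-containerized deployment; adjust to your setup):

    # on the OSD host: confirm the running daemon picked up the new value
    ceph daemon osd.0 config show | grep osd_op_num_threads_per_shard
    # watch per-thread CPU of the OSD process; a bstore_kv_sync bottleneck
    # shows up as one hot thread while the shard worker threads sit idle
    top -H -p $(pidof ceph-osd | awk '{print $1}')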

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: 26 October 2021 11:14:28
To: Frank Schilder; ceph-users
Subject: Re:  Re: ceph-osd iodepth for high-performance SSD OSDs

On 10/26/21 10:22, Frank Schilder wrote:
> It looks like the bottleneck is the bstore_kv_sync thread, there seems to be only one running per OSD daemon independent of shard number. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deploying even more OSD daemons per OSD?

Regarding num op threads, see Slide 23 of [1]:

• osd_op_num_threads_per_shard * osd_op_num_shards
• Keep at number_of_threads_of_your_cpu_can_handle - async_msgr_op_threads - 3..5
  - For example, "osd op num shards = 8", "osd op num threads per shard = 2"
    and "ms async op threads = 3" for a 22-core CPU with HT/SMT
    (2*8+3 = 19, 3 threads left for BlueStore, RocksDB, etc.)
• Increase in case of slower NVMe to improve IOPS and latency in random
  reads/writes
  - Offset NVMe processing time (iowait) by using other threads to do
    CPU-consuming work in the meantime
• Don't set too high, or context switches will kill your performance
• Too low values will cause your OSDs to stall
• Change requires OSD restart
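
To make the slide's arithmetic concrete, a sketch of how those example values could be applied via the config database (the values 8/2/3 are the slide's example for a 22-core CPU, not a recommendation for your hardware, and osd id 0 is just a placeholder):

    # slide 23 example: 8 shards * 2 threads/shard + 3 msgr threads = 19,
    # leaving 3 threads of a 22-core CPU for BlueStore, RocksDB, etc.
    ceph config set osd osd_op_num_shards 8
    ceph config set osd osd_op_num_threads_per_shard 2
    ceph config set osd ms_async_op_threads 3
    # the change requires an OSD restart, e.g. one OSD at a time:
    systemctl restart ceph-osd@0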


Gr. Stefan

[1]:
https://static.sched.com/hosted_files/cephalocon2019/d6/ceph%20on%20nvme%20barcelona%202019.pdf
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



