Re: ceph-osd iodepth for high-performance SSD OSDs

It looks like the bottleneck is the bstore_kv_sync thread; there seems to be only one of these running per OSD daemon, independently of the number of shards. This would imply a rather low effective queue depth per OSD daemon. Are there ways to improve this other than deploying even more OSD daemons per drive?
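
For reference, this is roughly how I looked at the per-thread activity on one of the hosts (just a sketch, assuming a non-containerized systemd deployment; osd.0 is only a placeholder for one of the OSD ids on the node):

    # PID of the ceph-osd process for an example OSD id
    OSD_PID=$(systemctl show --property MainPID --value ceph-osd@0)
    # list its threads; only a single bstore_kv_sync thread shows up
    ps -T -p "$OSD_PID" -o spid,comm,pcpu | grep -E 'COMMAND|bstore_kv_sync'
    # or watch per-thread CPU usage live
    top -H -p "$OSD_PID"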

Thanks!
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Frank Schilder <frans@xxxxxx>
Sent: 26 October 2021 09:41:44
To: ceph-users
Subject:  ceph-osd iodepth for high-performance SSD OSDs

Hi all,

we deployed a pool on high-performance SSDs and I'm testing aggregated performance. We seem to hit a bottleneck that is not caused by the drives themselves. My best guess at the moment is that the effective iodepth of the OSD daemons is too low for these drives. I have 4 OSDs per drive, and I vaguely remember that there are parameters to adjust the degree of concurrency an OSD daemon uses when writing to disk. Are these the parameters I'm looking for:

    "osd_op_num_shards": "0",
    "osd_op_num_shards_hdd": "5",
    "osd_op_num_shards_ssd": "8",
    "osd_op_num_threads_per_shard": "0",
    "osd_op_num_threads_per_shard_hdd": "1",
    "osd_op_num_threads_per_shard_ssd": "2",

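For reference, this is how I read the values above; the first command shows what a running daemon actually uses, the second what the mon config database resolves (osd.0 is just an example id, run the daemon command on the host where that OSD lives):

    # values as seen by the running OSD daemon
    ceph daemon osd.0 config show | grep osd_op_num
    # value resolved by the mon config database for that OSD
    ceph config get osd.0 osd_op_num_shards_ssd
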
How do these apply if these drives are in a custom device class rbd_perf? Could I, for example, set

ceph config set osd/class:rbd_perf osd_op_num_threads_per_shard 4

to increase concurrency on this particular device class only? Is it possible to increase the number of shards at run-time?
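
If it matters, this is how I would try to verify whether such a class-masked setting is actually picked up (a sketch only; osd.0 stands in for one of the OSDs in the rbd_perf class, and I'm not sure the running daemon applies the new value without a restart):

    # set the value only for OSDs whose device class is rbd_perf
    ceph config set osd/class:rbd_perf osd_op_num_threads_per_shard 4
    # what the mon config database resolves for a specific OSD
    ceph config get osd.0 osd_op_num_threads_per_shard
    # compare with what the running daemon reports
    ceph daemon osd.0 config get osd_op_num_threads_per_shard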

Thanks for your help!

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx