ceph-osd iodepth for high-performance SSD OSDs

Hi all,

We deployed a pool on high-performance SSDs and I'm testing aggregate performance. We seem to hit a bottleneck that is not caused by drive performance. My best guess at the moment is that the effective iodepth of the OSD daemons is too low for these drives. I have 4 OSDs per drive and I vaguely remember that there are parameters to adjust the degree of concurrency with which an OSD daemon writes to disk. Are these the parameters I'm looking for:

    "osd_op_num_shards": "0",
    "osd_op_num_shards_hdd": "5",
    "osd_op_num_shards_ssd": "8",
    "osd_op_num_threads_per_shard": "0",
    "osd_op_num_threads_per_shard_hdd": "1",
    "osd_op_num_threads_per_shard_ssd": "2",

How do these apply if I have these drives in a custom device class rbd_perf? Could I set, for example

ceph config set osd/class:rbd_perf osd_op_num_threads_per_shard 4

to increase concurrency on this particular device class only? Is it possible to increase the number of shards at run-time?
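If that works, I assume I could then verify what an OSD in that class actually picked up with something like this (osd.12 standing in for an OSD in the rbd_perf class):

    # value resolved from the mon config database for this OSD
    ceph config get osd.12 osd_op_num_threads_per_shard
    # value the running daemon is actually using (via admin socket)
    ceph daemon osd.12 config get osd_op_num_threads_per_shard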

Thanks for your help!

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14