Hi Eric,

Our team has developed a QoS feature on Ceph using the dmclock library from the community. We treat each RBD image as a dmclock client, rather than a pool. We tested our code and the results are confusing.

Testing environment: a single server with 16 cores, 32 GB of RAM, and 8 non-system disks, each running one OSD. On each OSD we set
osd_op_num_threads_per_shard=2,
osd_op_num_shards=5.
Our RBD image is 100 GB, and its QoS parameters are
(r: 1k, p: 100, l: 2k), with rbd_cache = false and allow_limit_break = false.

Conclusion: we only got 1500 IOPS, even though the system can serve much more than the limit value of 2000. We tested with fio and adjusted osd_op_num_shards, and we found that the IOPS grows along with osd_op_num_shards; eventually it breaks the limit (fio reported 2300 IOPS with osd_op_num_shards = 20).

Would you mind sharing your test environment and results for the dmclock library? We would also appreciate any other directions you could suggest; it would really be a great help for us.

Thanks
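P.S. In case it helps make the parameters concrete, below is a rough sketch (not our actual patch) of how the per-RBD tuple above maps onto the dmclock library's ClientInfo; the header and namespace names follow the community dmclock repo (https://github.com/ceph/dmclock) and may differ slightly depending on the version in your tree.

#include "dmclock_server.h"  // from the ceph/dmclock repo

namespace dmc = crimson::dmclock;

int main() {
  // Per-RBD QoS tuple used in our test:
  //   reservation r = 1000 IOPS, weight/proportion p = 100, limit l = 2000 IOPS
  dmc::ClientInfo rbd_qos(1000.0, 100.0, 2000.0);

  // In our patch, each RBD image id is used as the dmclock client id, and the
  // server-side priority queue is constructed with allow_limit_break = false,
  // so no client should be scheduled above its 2000 IOPS limit.
  (void)rbd_qos;
  return 0;
}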