On 06/02/2017 03:36 AM, Lijie wrote:
> Hi Eric,
>
> Our team has developed a QoS feature for Ceph using the dmclock library
> from the community. We treat an RBD image as a dmclock client rather
> than a pool. We tested our code and the results are confusing.
>
> Testing environment: a single server with 16 cores, 32 GB of RAM, and 8
> non-system disks, each running one OSD. We set
> osd_op_num_threads_per_shard=2 and osd_op_num_shards=5 on each OSD. The
> size of our RBD is 100 GB. The QoS params of the RBD are (r:1k, p:100,
> l:2k), rbd_cache = false, allow_limit_break = false.
>
> Conclusion: we only got 1500 IOPS, even though the system can serve much
> more than the limit value of 2000.
>
> We used fio and adjusted osd_op_num_shards, and we found that IOPS
> increases as osd_op_num_shards grows. Eventually it can break the limit
> value (fio IOPS = 2300 with osd_op_num_shards = 20).
>
> So would you mind providing your test environment and results for the
> dmclock library? We would also appreciate it if you could offer some
> other direction. It would really be a great help to us.

Are you using src/common/mClockPriorityQueue.h? Because it makes sure
allow_limit_break is true and asserts that, as long as the dmclock queue
is not empty, it will retrieve a request when pull_request is called
(assert(pr.is_retn());). If allow_limit_break is false, that assert can
fail.

I'm guessing you've created your own interface to the dmclock library,
because of the issue outlined above.

The dmclock library has a simulator built on top of it, which you can use
to design various scenarios and see how it works. You can run it, for
example, by:

$ git clone git@xxxxxxxxxx:ceph/dmclock.git
$ cd dmclock/
$ mkdir build
$ cd build
$ cmake ..
$ make dmclock-sims
$ sim/dmc_sim -c ../sim/dmc_sim_100th.conf

where the file dmc_sim_100th.conf describes the scenario being simulated.

I'm not sure I fully understand your approach. If you can describe it in
more detail, I'll try to address any questions.

Eric
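
For reference, here is roughly what a minimal interface to the pull
version of the dmclock queue could look like when each RBD image is its
own dmclock client with the (r:1k, p:100, l:2k) parameters above. This is
only a sketch from memory of the dmclock headers, not code from the Ceph
tree; RbdId, Op, and rbd_client_info are placeholders, and constructor and
overload details may differ in your checkout:

  #include <string>
  #include "dmclock_server.h"

  namespace dmc = crimson::dmclock;

  using RbdId = std::string;        // placeholder: one client id per RBD image
  struct Op { /* queued request */ };  // placeholder request type

  // QoS parameters from the test above: reservation 1000, weight 100,
  // limit 2000 (names/order per the dmclock ClientInfo constructor).
  dmc::ClientInfo rbd_client_info(const RbdId&) {
    return dmc::ClientInfo(1000.0, 100.0, 2000.0);
  }

  // Second argument is allow_limit_break; with false, pull_request() is
  // allowed to answer "not yet" even when requests are queued.
  dmc::PullPriorityQueue<RbdId, Op> qos_queue(rbd_client_info, false);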
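
Continuing that sketch, this is the case the assert in
mClockPriorityQueue.h relies on never happening: with allow_limit_break
false, pull_request() can report that the next request only becomes
eligible in the future, so the caller has to handle that outcome instead
of asserting is_retn(). Only pull_request(), is_retn(), is_future(), and
get_retn() are taken from the library interface here; handle_op and the
wake-up strategy are up to the caller:

  void handle_op(const RbdId&, Op&&);   // placeholder for your dispatch path

  void pull_one() {
    auto pr = qos_queue.pull_request();
    if (pr.is_retn()) {
      auto& retn = pr.get_retn();        // retn.client, retn.phase, retn.request
      handle_op(retn.client, std::move(*retn.request));
    } else if (pr.is_future()) {
      // A request is queued but its limit tag is still in the future;
      // re-poll later (e.g. arm a timer) instead of assert(pr.is_retn()).
    } else {
      // pr.is_none(): nothing is queued for any client.
    }
  }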