Re: How best to integrate dmClock QoS library into ceph codebase

Hi Sheng,

I’ll interleave responses below.

> On Jul 11, 2017, at 2:14 PM, sheng qiu <herbert1984106@xxxxxxxxx> wrote:
> We are trying to evaluate dmclock's effect on controlling recovery
> traffic in order to reduce its impact on client IO.
> However, we are experiencing some problems and didn't get the results we expected.
> 
> We set up a small cluster with several OSD machines. In our
> configuration, we set recovery limit = 0.001 or even smaller, and
> res = 0.0, wgt = 1.0.
> We set client res = 20k or even higher, limit = 0.0, wgt = 500.
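For readers following along, a setup like the one described would look roughly like the following in ceph.conf. This is a sketch assuming the Luminous-era mclock_opclass op queue; double-check the exact option names against your Ceph version before relying on them.

```ini
[osd]
# Select the class-based mClock op queue (assumed option name)
osd op queue = mclock_opclass

# Client ops: high reservation, high weight, no limit (0 = unlimited)
osd op queue mclock client op res = 20000.0
osd op queue mclock client op wgt = 500.0
osd op queue mclock client op lim = 0.0

# Recovery ops: no reservation, low weight, very low limit
osd op queue mclock recov res = 0.0
osd op queue mclock recov wgt = 1.0
osd op queue mclock recov lim = 0.001
```

Note that, as discussed below, the limit values were not enforced by the in-tree code at the time of this thread.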

As presently implemented, limits are not enforced. There is a PR that modifies the code to enforce them (https://github.com/ceph/ceph/pull/16242), which I'm still evaluating. You can see some discussion of the issue at http://marc.info/?l=ceph-devel&m=149867479701646&w=2.

> Then we killed an osd while doing fio on the client side and brought it back to
> trigger recovery. We saw fio iops still reduced a lot compared to
> not using the dmclock queue. We did some debugging and saw that when
> recovery is active, fio requests are enqueued much less frequently than
> before.

Are you saying that fio requests from the client slow down? I assume you’re using the fio tool. If so, what is max-jobs set to?

Also, are you saying that fio iops were lower with mclock than with the weighted priority queue ("wpq")?

> Overall, it seems dmclock's configuration on the recovery side does not make
> any difference. Since the enqueue rate of fio requests is reduced,
> when dmclock tries to dequeue a request, there's less chance it pulls a
> fio request.

Theoretically, at least, with a higher reservation value the requests should get smaller reservation tags, which should bias mclock toward dequeueing them. So I'd like to know more about your experiment.
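To make the intuition concrete, here is a toy sketch of the reservation-tag arithmetic (this is an illustration of the mClock idea from the paper, not the dmClock library's actual code): each request's reservation tag is max(previous tag + 1/reservation, arrival time), and the scheduler serves the smallest eligible tag first.

```python
def next_res_tag(prev_tag, reservation, now):
    """Reservation tag for a client's next request (toy mClock-style rule).

    A reservation of 0 would mean "no reservation tag" in dmClock, so this
    sketch only applies to classes with a nonzero reservation.
    """
    return max(prev_tag + 1.0 / reservation, now)

# With client res = 20000 IOPS, five back-to-back requests arriving at t=0
# accumulate tags of only 1/20000 s each, so their tags stay tiny and they
# should win the reservation phase against a class with no reservation.
tag = 0.0
for _ in range(5):
    tag = next_res_tag(tag, 20000.0, now=0.0)
print(tag)  # 5 * 1/20000 = 0.00025
```

The point of the sketch: a larger reservation shrinks the per-request tag increment, so a client with res = 20k should keep being selected during recovery, which is why the reported slowdown is surprising.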

Thank you,

Eric
