Re: FW: Ceph dmClock


 



On Tue, Nov 29, 2016 at 11:59 AM, J. Eric Ivancich <ivancich@xxxxxxxxxx> wrote:
> On 11/27/2016 04:50 AM, Byung Su Park wrote:
>> P4
>> : To check the current QoS quality between clients in a Ceph cluster
>> using the pool (LibRADOS) unit based mclock operation queue, we ran
>> some tests. Although each client showed some momentary IO variation,
>> under some test conditions the average values showed a satisfactory
>> QoS result.
>> (Note that some IO variation currently also appears with the default
>> WPQ operation queue.)
>> (Further experimentation and analysis with various test conditions
>> and issues is still required.)
>> The specific test environment and results are attached in an
>> additional pdf file.
>
> The results look very impressive. I've seen the IO variation in my tests
> as well, and it seems unlikely to be due to operation queuing.

If you have not already set "osd_op_queue_cut_off = low" in addition
to "osd_op_queue = wpq", doing so should give you much less variation
in op latency between clients. There currently seems to be an issue
with snapshots and cut_off low, so use caution with snapshots. [1] I
can't pull up the tracker at the moment to check its status.

[1] http://tracker.ceph.com/issues/15774
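For reference, the two settings above would both go in the [osd]
section of ceph.conf, roughly like this (a minimal sketch; defaults
and accepted values can vary by Ceph release, so check your version's
config reference):

    [osd]
    # Use the weighted priority queue implementation for the OSD op queue
    osd_op_queue = wpq
    # "low" applies the chosen queue to a wider range of op priorities,
    # which is what reduces latency variation between clients
    osd_op_queue_cut_off = low

A restart of the affected OSDs is needed for op-queue settings like
these to take effect.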

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



