Re: CephFS client side metadata ops throttling based on quotas

On Thu, 28 Feb 2019 at 09:50, Xiaoxi Chen <superdebuger@xxxxxxxxx> wrote:
>
> I doubt throttling can work.
>
> If your two workloads can stay in separate namespaces (i.e., not sharing any data), you can easily achieve the isolation with multi-MDS + dir_pin.
>
> If your bursty workload and your regular workload target the same namespace, throttling may cause more lock contention and in general make things worse. E.g., "ls -al" translates into a readdir plus a lot of getattrs, and holds the read lock of the dir; if you throttle the getattrs, the read lock is held for a longer time and blocks the writers.
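
For reference, this is roughly what "ls -al" does under the hood (a
minimal C++ sketch; on a CephFS mount, each lstat() that cannot be
served from cached caps becomes a getattr request to the MDS):

#include <dirent.h>
#include <sys/stat.h>
#include <cstdio>
#include <string>

int main(int argc, char **argv) {
  const char *path = argc > 1 ? argv[1] : ".";
  DIR *d = opendir(path);              // one readdir stream on the dir
  if (!d) { perror("opendir"); return 1; }
  struct dirent *de;
  while ((de = readdir(d)) != nullptr) {
    std::string p = std::string(path) + "/" + de->d_name;
    struct stat st;
    if (lstat(p.c_str(), &st) == 0)    // one stat (-> getattr) per entry
      printf("%s %lld\n", de->d_name, (long long)st.st_size);
  }
  closedir(d);
  return 0;
}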

Hi, it seems that readdir wouldn't keep holding the dir's read lock:
if a concurrent client requests write caps on the dir, the MDS can
revoke the dir-reading client's corresponding caps. So it seems that
"ls -al" wouldn't block other writers for long, am I right?
Thanks :-)

>
> Xuehan Xu <xxhdx1985126@xxxxxxxxx> wrote on Wed, Feb 27, 2019, at 11:59 AM:
>>
>> >
>> > The client doesn't know the global metadata load. How does it avoid
>> > throttling while the global load is relatively low?
>> >
>>
>> In our scenario, the number of metadata IOs issued by clients is
>> relatively low, but there are some bursty workloads on a small
>> number of clients. So we think maybe we can implement a client-side
>> throttler like the one in RBD (https://github.com/ceph/ceph/pull/17032)
>> and set the threshold to a value somewhat higher than the throughput
>> needed most of the time, which wouldn't cause problems in ordinary
>> cases and would prevent those bursty clients from using up cluster
>> resources. Is this feasible? (A rough sketch of such a throttler is
>> included at the end of this message.)
>>
>> And, by the way, do you think dmclock can also be used for file
>> system metadata IO QoS? Thanks :-)
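
For what it's worth, here is a minimal token-bucket sketch of the kind
of client-side throttle described above. All names are hypothetical;
this is not the actual RBD throttle from
https://github.com/ceph/ceph/pull/17032, just an illustration of the
"threshold above normal throughput" idea:

#include <algorithm>
#include <chrono>
#include <mutex>
#include <thread>

class OpThrottle {
  using Clock = std::chrono::steady_clock;
public:
  // rate: sustained metadata ops/sec; burst: bucket capacity.
  OpThrottle(double rate, double burst)
      : rate_(rate), burst_(burst), tokens_(burst), last_(Clock::now()) {}

  // Block until one token is available, then consume it.
  // Called once before each metadata request is sent.
  void acquire() {
    std::unique_lock<std::mutex> l(lock_);
    for (;;) {
      refill();
      if (tokens_ >= 1.0) { tokens_ -= 1.0; return; }
      // Sleep until roughly one token has accumulated.
      auto wait = std::chrono::duration<double>((1.0 - tokens_) / rate_);
      l.unlock();
      std::this_thread::sleep_for(wait);
      l.lock();
    }
  }

private:
  // Add tokens for the time elapsed since the last refill,
  // capped at the burst size.
  void refill() {
    auto now = Clock::now();
    std::chrono::duration<double> dt = now - last_;
    last_ = now;
    tokens_ = std::min(burst_, tokens_ + dt.count() * rate_);
  }

  const double rate_, burst_;
  double tokens_;
  Clock::time_point last_;
  std::mutex lock_;
};

With the rate set well above the typical metadata load, ordinary
clients never wait, while a bursty client is smoothed to roughly that
many ops per second once its burst allowance is spent.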



