Re: How to throttle operations like "rbd rm"

Hi Paul,

On June 14, 2018 at 00:33:09 CEST, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>2018-06-13 23:53 GMT+02:00 <ceph@xxxxxxxxxx>:
>
>> Hi Yao,
>>
>> IIRC there is a *sleep* option which is useful when a delete operation
>> is being done by Ceph... sleep_trim or something like that.
>>
>
>you are thinking of "osd_snap_trim_sleep", which is indeed a very helpful
>option - but not for image deletions.
>It rate-limits snapshot deletion only.
>
Yes, that is what I meant :)

So there isn't a way to throttle a normal delete like this?

- Mehmet  
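
PS: osd_snap_trim_sleep can be set in ceph.conf or injected at runtime;
a minimal sketch (the 0.5 second value is only an illustration, tune it
for your cluster):

  # ceph.conf, [osd] section
  osd snap trim sleep = 0.5

  # or at runtime, on all OSDs
  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.5'

As noted above, this throttles snapshot trimming only, not image deletion.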

>Paul
>
>
>>
>> - Mehmet
>>
>> On June 7, 2018 at 04:11:11 CEST, Yao Guotao <yaoguo_tao@xxxxxxx> wrote:
>>>
>>> Hi Jason,
>>>
>>> Thank you very much for your reply.
>>> I think the RBD trash is a good approach, but QoS in Ceph would be a
>>> better solution.
>>> I am looking forward to backend QoS in Ceph.
>>>
>>> Thanks.
>>>
>>>
>>> At 2018-06-06 21:53:23, "Jason Dillaman" <jdillama@xxxxxxxxxx> wrote:
>>> >The 'rbd_concurrent_management_ops' setting controls how many
>>> >concurrent, in-flight RADOS object delete operations are possible per
>>> >image removal. The default is only 10, so given 10 images being
>>> >deleted concurrently, I am actually surprised that blocked all IO from
>>> >your VMs.
>>> >
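If it helps, that setting can be lowered on the client that performs the
removal; a minimal sketch (5 is only an example value, and mypool/myimage
is a placeholder):

  # ceph.conf on the client (e.g. the cinder-volume host), [client] section
  rbd concurrent management ops = 5

  # or, as a per-invocation override passed to the rbd CLI
  rbd rm mypool/myimage --rbd-concurrent-management-ops=5

This spreads each image removal over fewer in-flight object deletes, at
the cost of a slower "rbd rm".
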
>>> >Adding support for limiting the maximum number of concurrent image
>>> >deletions would definitely be an OpenStack enhancement. There is an
>>> >open blueprint for optionally utilizing the RBD trash instead of
>>> >having Cinder delete the images [1], which would allow you to defer
>>> >deletions to whenever is convenient. Additionally, once Ceph adds
>>> >support for backend QoS (fingers crossed in Nautilus), we can change
>>> >librbd to flag all IO for maintenance activities to background (best
>>> >effort) priority, which might be the best long-term solution.
>>> >
>>> >[1] https://blueprints.launchpad.net/cinder/+spec/rbd-deferred-volume-deletion
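
The trash part can already be used by hand today; a rough sketch of the
deferred-deletion idea (pool and image names are placeholders):

  # detach the image right away, but defer the expensive delete
  rbd trash mv mypool/myimage

  # later, in a quiet period, list and remove the trashed images
  rbd trash ls mypool
  rbd trash rm mypool/<image-id-from-trash-ls>

The actual delete still costs the same IO, it just runs when you choose.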
>>> >
>>> >On Wed, Jun 6, 2018 at 6:40 AM, Yao Guotao <yaoguo_tao@xxxxxxx> wrote:
>>> >> Hi Cephers,
>>> >>
>>> >> We use Ceph with OpenStack via the librbd library.
>>> >>
>>> >> Last week, my colleague deleted 10 volumes from the OpenStack
>>> >> dashboard at the same time; each volume had about 1 TB used.
>>> >> During this time, the disks of the OSDs were busy, and there was no
>>> >> I/O for normal VMs.
>>> >>
>>> >> So, I want to know if there are any parameters that can be set to
>>> >> throttle this?
>>> >>
>>> >> I found a parameter related to RBD ops:
>>> >> 'rbd_concurrent_management_ops'.
>>> >> I am trying to figure out how it works in the code, and I find that
>>> >> this parameter only controls the asynchronous deletion of the objects
>>> >> of a single image.
>>> >>
>>> >> Besides, should it be controlled at the OpenStack Nova or Cinder
>>> >> layer?
>>> >>
>>> >> Thanks,
>>> >> Yao Guotao
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >--
>>> >Jason
>>>
>>>
>>>
>>>
>>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


