Re: slow "rados ls"

Hi Wido/Joost

pg_num is 64. It is not that we use 'rados ls' for operations; we just
noticed as a difference that on this cluster it takes about 15 seconds to
return on pool .rgw.root or rc3-se.rgw.buckets.index, while our other
clusters return almost instantaneously.
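
For reference, the comparison boils down to something like the following
(just a sketch; output redirected to /dev/null so only the timing remains):

    # time an object listing on the affected pools
    time rados -p .rgw.root ls > /dev/null
    time rados -p rc3-se.rgw.buckets.index ls > /dev/null

    # confirm pg_num for comparison with the other clusters
    ceph osd pool get .rgw.root pg_num
    ceph osd pool get rc3-se.rgw.buckets.index pg_num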

Is there a way to determine from statistics whether manual compaction
might help (besides doing the compaction and noticing the difference in
behaviour)? Any pointers for investigating this further would be much
appreciated.
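
For example, would looking at the per-OSD OMAP/META usage and the bluefs
counters be a reasonable starting point? Something like this (osd.0 is just
an example id, run on the OSD host):

    # per-OSD OMAP and metadata usage
    ceph osd df

    # rocksdb/bluefs DB usage on a single OSD
    ceph daemon osd.0 perf dump bluefs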

Is there operational impact to be expected when compacting manually?
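
To be explicit, by manual compaction I assume something along these lines,
one OSD at a time (<id> is a placeholder):

    # online, via the admin interface
    ceph tell osd.<id> compact

    # or offline, with the OSD stopped
    systemctl stop ceph-osd@<id>
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact
    systemctl start ceph-osd@<id>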

Kind Regards

Marcel Kuiper

>
>
> On 26/08/2020 15:59, Stefan Kooman wrote:
>> On 2020-08-26 15:20, Marcel Kuiper wrote:
>>> Hi Vladimir,
>>>
>>> no, it is the same on all monitors. Actually I got triggered because I
>>> got slow responses on my rados gateway with the radosgw-admin command
>>> and narrowed it down to slow responses for rados commands anywhere in
>>> the cluster.
>>
>> Do you have a very large number of objects, and/or a lot of OMAP data
>> and thus large rocksdb databases? We have seen slowness (and slow ops)
>> from having very large rocksdb databases due to a lot of OMAP data
>> concentrated on only a few nodes (CephFS metadata only). You might
>> suffer from the same thing.
>>
>> Manual rocksdb compaction on the OSDs might help.
>
> In addition: keep in mind that RADOS was never designed to list objects
> fast. The more placement groups you have, the slower a listing will be.
>
> Wido
>
>>
>> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


