Re: slow "rados ls"

On 02/09/2020 12:07, Stefan Kooman wrote:
> On 2020-09-01 10:51, Marcel Kuiper wrote:
>> As a matter of fact we did. We doubled the storage nodes from 25 to
>> 50. Total OSDs now 460.
>>
>> Do you want to share your thoughts on that?

> Yes. We observed the same thing with expansions. The OSDs will be very
> busy (with multiple threads per OSD) on housekeeping after the OMAP
> data has been moved to another OSD, eating up all the CPU power they
> can get. But even after that, a lot of garbage is left behind that
> does not get cleaned up, at least not by regular housekeeping / online
> compaction. Manual compaction for clusters with a lot of OMAP data
> feels like a necessity (and ideally shouldn't be).

Indeed, it shouldn't be.
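
In the meantime you can trigger the compaction manually per OSD. A
rough sketch (assuming OSD id 0 and the default data path, adjust for
your cluster):

  # online: ask a running OSD to compact its RocksDB store
  ceph tell osd.0 compact
  # or all OSDs at once (expect heavy disk/CPU load while it runs)
  ceph tell 'osd.*' compact

  # offline: compact a stopped OSD's BlueStore key/value store
  systemctl stop ceph-osd@0
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
  systemctl start ceph-osd@0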

This config option should make it easier in a future release: https://github.com/ceph/ceph/commit/93e4c56ecc13560e0dad69aaa67afc3ca053fb4c

[osd]
osd_compact_on_start = true

Then just restart the OSDs and they will compact on boot. No need for external scripts; just put this into ceph.conf.
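
For example, with systemd-managed OSDs (restart them one at a time, or
one host at a time, so PGs stay available):

  # after adding the snippet to /etc/ceph/ceph.conf on each OSD host:
  systemctl restart ceph-osd@0
  # or all OSDs on one host:
  systemctl restart ceph-osd.target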

Setting it through the mon config store won't work, as the option is read at a point in the startup code where the OSD has no connection to the Monitors yet; it has to live in the local ceph.conf.
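
So, for illustration, this would not trigger the compaction on boot:

  # not effective for compact-on-start: the OSD reads the option
  # before it has contacted the Monitors
  ceph config set osd osd_compact_on_start true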

Wido


> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
