Re: slow "rados ls"

On 2020-09-01 10:51, Marcel Kuiper wrote:
> As a matter of fact we did. We doubled the storage nodes from 25 to 50.
> Total osds now 460.
> 
> You want to share your thoughts on that?

Yes. We observed the same thing with expansions. The OSDs will be very
busy (with multiple threads per OSD, eating up all the CPU they can get)
on housekeeping after the OMAP data has been moved to another OSD. But
even after that, a lot of garbage is left behind that does not get
cleaned up, at least not by the regular housekeeping / online
compaction. For clusters with a lot of OMAP data, manual compaction
feels like a necessity (and ideally it shouldn't be).
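A minimal sketch of what such a manual pass could look like, using the online `ceph tell osd.<id> compact` command to trigger a full RocksDB compaction per OSD. The OSD ids (12-14) are placeholders for illustration; the loop only prints the commands so you can review them before running anything against a live cluster:

```shell
#!/bin/sh
# Hypothetical example: print an online-compaction command for each OSD
# on this host (ids 12-14 are assumed). Remove the echo to execute them;
# compact OSDs one at a time, as compaction is I/O- and CPU-intensive.
for id in 12 13 14; do
    echo ceph tell "osd.${id}" compact
done
```

For OSDs that are down, an offline compaction of the BlueStore RocksDB is also possible (e.g. via ceph-kvstore-tool), but the online variant above avoids taking the OSD out of service.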

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
