Re: OSDs crush - Since Pacific

On 8/22/22 11:50, Wissem MIMOUNA wrote:
Dear All,

After updating our Ceph cluster from Octopus to Pacific, we got a lot of slow_ops on many OSDs, which caused the cluster to become very slow.

Do you have any automatic conversion going on? What is the setting of "bluestore_fsck_quick_fix_on_mount" on your cluster OSDs?

ceph daemon osd.$id config get bluestore_fsck_quick_fix_on_mount
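If the host runs several OSDs, the same check can be looped over every mounted OSD. A minimal sketch, assuming the default /var/lib/ceph/osd/ceph-&lt;id&gt; mount layout (adjust for containerized deployments):

```shell
# Query bluestore_fsck_quick_fix_on_mount on every OSD of this host.
# Assumes OSD data is mounted at /var/lib/ceph/osd/ceph-<id>.
for dir in /var/lib/ceph/osd/ceph-*; do
    id="${dir##*-}"   # keep only the OSD id after the last '-'
    echo -n "osd.${id}: "
    ceph daemon "osd.${id}" config get bluestore_fsck_quick_fix_on_mount
done
```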

We did our investigation, searched the ceph-users list, and found that rebuilding all OSDs can improve (or fix) the issue (we suspect the cause is a fragmented BlueStore filesystem),

RocksDB itself might need to be compacted (offline operation). You might want to stop all OSDs on a host and perform a compaction on them to see if it helps with slow ops [1]:

# List the mounted OSD data directories, extract the OSD ids,
# and compact up to 10 OSDs in parallel:
df | grep "/var/lib/ceph/osd" | awk '{print $6}' | cut -d '-' -f 2 | sort -n | \
    xargs -n 1 -P 10 -I OSD ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-OSD compact
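Spelled out per OSD with the stop/start handling made explicit, since the OSD must be down while ceph-kvstore-tool operates on its store. A sketch assuming systemd-managed, non-containerized OSDs and the default mount layout (cephadm/containerized clusters use different unit names):

```shell
# Compact RocksDB on every OSD of this host, one at a time.
# Each OSD is stopped before compaction and restarted afterwards.
for dir in /var/lib/ceph/osd/ceph-*; do
    id="${dir##*-}"                      # OSD id from the mount path
    systemctl stop "ceph-osd@${id}"
    ceph-kvstore-tool bluestore-kv "$dir" compact
    systemctl start "ceph-osd@${id}"
done
```

Doing the OSDs sequentially instead of 10 in parallel keeps more of the host's OSDs up at any moment, at the cost of a longer total run.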

What is the status of your cluster (ceph -s)?

Gr. Stefan

[1]: https://gist.github.com/wido/b0f0200bd1a2cbbe3307265c5cfb2771
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


