Scrubs Randomly Starting/Stopping

I have just upgraded a cluster from Ceph 17.2.7 (Quincy) to 18.2.1 (Reef).

Everything is working as expected, except that the number of scrubs and deep scrubs reported by the cluster is bouncing all over the place every second.

I have the per-OSD scrub limit (osd_max_scrubs) set to 1, but one moment the cluster reckons it is doing 60+ scrubs, a second later this drops to 40, then it jumps back to 70.
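In case it matters, this is roughly how I am reading the limit and the headline count (standard ceph CLI, so it should behave the same on Reef):

# Confirm the per-OSD scrub cap (osd_max_scrubs, default 1)
ceph config get osd osd_max_scrubs

# Headline PG states, including how many PGs report scrubbing / deep scrubbing
ceph pg stat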

If I check the live Ceph cluster log, I can see it reporting multiple PGs starting either a scrub or a deep scrub every second. It does not look like these are actually running, as there is no negative effect on the cluster's performance.
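If it helps, this is how I am watching the rate of those start messages (assuming the cluster log is going to the mons as usual):

# Tail the cluster log live and keep only the scrub/deep-scrub start lines
ceph -w | grep 'scrub starts'

# Or pull recent cluster log entries after the fact
ceph log last 200 info cluster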

Is this something to be expected off the back of the upgrade, and should it sort itself out?

A sample of the logs:

2024-02-24T00:41:20.055401+0000 osd.54 (osd.54) 3160 : cluster 0 12.9a deep-scrub starts
2024-02-24T00:41:19.658144+0000 osd.41 (osd.41) 4103 : cluster 0 12.cd deep-scrub starts
2024-02-24T00:41:19.823910+0000 osd.33 (osd.33) 5625 : cluster 0 12.ae deep-scrub starts
2024-02-24T00:41:19.846736+0000 osd.65 (osd.65) 3947 : cluster 0 12.53 deep-scrub starts
2024-02-24T00:41:20.007331+0000 osd.20 (osd.20) 7214 : cluster 0 12.142 scrub starts
2024-02-24T00:41:20.114748+0000 osd.10 (osd.10) 6538 : cluster 0 12.2c deep-scrub starts
2024-02-24T00:41:20.247205+0000 osd.36 (osd.36) 4789 : cluster 0 12.16f deep-scrub starts
2024-02-24T00:41:20.908051+0000 osd.68 (osd.68) 3869 : cluster 0 12.d7 deep-scrub starts
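To check whether any PGs are actually in a scrubbing state (as opposed to just logging a start), I am counting states like this; grep -c simply counts the matching lines:

# Count PGs whose STATE column currently includes "scrubbing"
ceph pg dump pgs_brief 2>/dev/null | grep -c scrubbing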