OSDs spam log with scrub starts


 



Hello,

On our 16.2.14 CephFS cluster, all OSDs are spamming their logs with messages like "log_channel(cluster) log [DBG] : xxx scrub starts". All OSDs are affected, for different PGs. The cluster is healthy, with no recovery ops in progress.

For a single PG, we can see hundreds of "scrub starts" messages in less than an hour. With 720 OSDs (8k PGs, EC 8+2), that adds up to millions of messages per hour...
For example, with PGs 3.1d57 and 3.1988:

Aug 31 16:02:09 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:09.453+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:11 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:11.446+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:12 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:12.428+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:13 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:13.456+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:14 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:14.431+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:15 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:15.475+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:21 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:21.516+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:23 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:23.555+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 scrub starts
Aug 31 16:02:24 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-58[1310188]: debug 2023-08-31T14:02:24.510+0000 7fdab1ec4700  0 log_channel(cluster) log [DBG] : 3.1d57 deep-scrub starts

Aug 31 16:02:10 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-276[1325507]: debug 2023-08-31T14:02:10.384+0000 7f0606ce3700  0 log_channel(cluster) log [DBG] : 3.1988 deep-scrub starts
Aug 31 16:02:11 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-276[1325507]: debug 2023-08-31T14:02:11.377+0000 7f0606ce3700  0 log_channel(cluster) log [DBG] : 3.1988 scrub starts
Aug 31 16:02:13 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-276[1325507]: debug 2023-08-31T14:02:13.383+0000 7f0606ce3700  0 log_channel(cluster) log [DBG] : 3.1988 scrub starts
Aug 31 16:02:15 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-276[1325507]: debug 2023-08-31T14:02:15.383+0000 7f0606ce3700  0 log_channel(cluster) log [DBG] : 3.1988 deep-scrub starts
Aug 31 16:02:17 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-276[1325507]: debug 2023-08-31T14:02:17.336+0000 7f0606ce3700  0 log_channel(cluster) log [DBG] : 3.1988 scrub starts
Aug 31 16:02:19 ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30-osd-276[1325507]: debug 2023-08-31T14:02:19.328+0000 7f0606ce3700  0 log_channel(cluster) log [DBG] : 3.1988 scrub starts
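To get a rough idea of the volume, something like the following can count these messages per OSD over the last hour (assuming cephadm-style systemd units named after the fsid seen in the log prefix; adjust the unit name to your deployment):

    journalctl -u ceph-86cd8a68-7649-11ed-b2be-5cba2c7fdb30@osd.58 --since "1 hour ago" | grep -c "scrub starts"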
The corresponding PG stats:

PG 3.1d57:
  OBJECTS: 52757, MISSING_ON_PRIMARY: 0, DEGRADED: 0, MISPLACED: 0, UNFOUND: 0
  BYTES: 167596026648, OMAP_BYTES*: 0, OMAP_KEYS*: 0, LOG: 1799, DISK_LOG: 1799
  STATE: active+clean, STATE_STAMP: 2023-08-31T14:27:24.025755+0000
  VERSION: 236010'4532653, REPORTED: 236011:8745383
  UP: [58,421,335,9,59,199,390,481,425,480], UP_PRIMARY: 58
  ACTING: [58,421,335,9,59,199,390,481,425,480], ACTING_PRIMARY: 58
  LAST_SCRUB: 231791'4531915, SCRUB_STAMP: 2023-08-29T22:41:12.266874+0000
  LAST_DEEP_SCRUB: 229377'4526369, DEEP_SCRUB_STAMP: 2023-08-26T04:30:42.894505+0000
  SNAPTRIMQ_LEN: 0

PG 3.1988:
  OBJECTS: 52867, MISSING_ON_PRIMARY: 0, DEGRADED: 0, MISPLACED: 0, UNFOUND: 0
  BYTES: 168603872808, OMAP_BYTES*: 0, OMAP_KEYS*: 0, LOG: 1811, DISK_LOG: 1811
  STATE: active+clean, STATE_STAMP: 2023-08-31T14:32:13.361420+0000
  VERSION: 236018'4241611, REPORTED: 236018:9815753
  UP: [276,342,345,299,210,349,85,481,446,46], UP_PRIMARY: 276
  ACTING: [276,342,345,299,210,349,85,481,446,46], ACTING_PRIMARY: 276
  LAST_SCRUB: 236012'4241602, SCRUB_STAMP: 2023-08-31T14:32:13.361343+0000
  LAST_DEEP_SCRUB: 228157'4229095, DEEP_SCRUB_STAMP: 2023-08-24T05:59:16.573471+0000
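The scrub stamps for a given PG can also be checked directly, for example (grepping the stamp fields out of the pg query JSON output):

    ceph pg 3.1d57 query | grep -E 'last_(deep_)?scrub_stamp'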

However, scrubbing itself seems to be working fine and the scrub stamps look OK (2023-08-29 or 2023-08-31), as we use the default scrub intervals (min 24h / max 7 days). I tried tweaking scrub parameters such as osd_scrub_load_threshold (-> 20), osd_max_scrubs (-> 3) and the osd_scrub_*_interval options, but it made no difference.
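For reference, the changes were applied at runtime roughly like this (osd.58 is only an example target for checking the effective values):

    ceph config set osd osd_scrub_load_threshold 20
    ceph config set osd osd_max_scrubs 3
    ceph config show osd.58 | grep scrub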

Any idea what's going on and how to fix this?

Cheers,
Adrien
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



