Scrub and deep-scrub repeating over and over

Hi all,

We have several PGs with repeating scrub tasks: as soon as a scrub completes, it starts again. You can get an idea from the log below:

$ ceph -w | grep -i "11.34a"
2016-09-08 08:28:33.346798 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:28:37.319018 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:28:39.363732 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:28:41.319834 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:28:43.411455 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:28:45.320538 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:28:47.308737 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:28:55.322159 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:28:57.362063 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:29:00.322918 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:29:02.418139 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:29:07.324022 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:29:09.469796 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:29:12.324752 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:29:14.353026 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:29:17.325801 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:29:19.446962 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:29:22.326297 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:29:24.389610 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:29:29.327707 osd.24 [INF] 11.34a deep-scrub starts
2016-09-08 08:37:13.887668 osd.24 [INF] 11.34a deep-scrub ok
2016-09-08 08:37:18.383127 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:37:20.700806 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:37:27.385027 osd.24 [INF] 11.34a deep-scrub starts
2016-09-08 08:44:36.073670 osd.24 [INF] 11.34a deep-scrub ok
2016-09-08 08:44:44.438164 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:44:47.017694 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:44:58.441510 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:45:00.524666 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:45:01.441945 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:45:03.585039 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:45:07.443524 osd.24 [INF] 11.34a deep-scrub starts
2016-09-08 08:52:16.020630 osd.24 [INF] 11.34a deep-scrub ok
2016-09-08 08:52:18.494388 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:52:20.519264 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:52:23.495231 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:52:25.514784 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:52:29.496117 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:52:31.505832 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:52:34.496818 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:52:36.475993 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:52:36.497652 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:52:38.483388 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:52:41.498299 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:52:43.509776 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:52:45.498929 osd.24 [INF] 11.34a deep-scrub starts
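To quantify the problem, here is a short Python sketch (a hypothetical helper, not part of Ceph) that parses a few of the log lines above and prints the gap in seconds between each "scrub ok" and the next "scrub starts". With osd_scrub_min_interval at 86400, these gaps should be on the order of a day; instead they are a few seconds:

```python
from datetime import datetime

# Sample lines taken verbatim from the `ceph -w` output above.
log = """\
2016-09-08 08:28:33.346798 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:28:37.319018 osd.24 [INF] 11.34a scrub starts
2016-09-08 08:28:39.363732 osd.24 [INF] 11.34a scrub ok
2016-09-08 08:28:41.319834 osd.24 [INF] 11.34a scrub starts
"""

def restart_gaps(lines):
    """Seconds between each 'scrub ok' and the following 'scrub starts'."""
    events = []
    for line in lines.splitlines():
        parts = line.split()
        ts = datetime.strptime(parts[0] + " " + parts[1],
                               "%Y-%m-%d %H:%M:%S.%f")
        events.append((ts, parts[-1]))  # last word is 'ok' or 'starts'
    return [(t2 - t1).total_seconds()
            for (t1, e1), (t2, e2) in zip(events, events[1:])
            if e1 == "ok" and e2 == "starts"]

print(restart_gaps(log))  # gaps of roughly 4 s and 2 s
```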

Some options from the cluster:
# ceph daemon /var/run/ceph/ceph-osd.24.asok config show | grep -i "scrub"
    "mon_warn_not_scrubbed": "0",
    "mon_warn_not_deep_scrubbed": "0",
    "mon_scrub_interval": "86400",
    "mon_scrub_timeout": "300",
    "mon_scrub_max_keys": "100",
    "mon_scrub_inject_crc_mismatch": "0",
    "mon_scrub_inject_missing_keys": "0",
    "mds_max_scrub_ops_in_progress": "5",
    "osd_scrub_invalid_stats": "true",
    "osd_max_scrubs": "1",
    "osd_scrub_begin_hour": "23",
    "osd_scrub_end_hour": "7",
    "osd_scrub_load_threshold": "10",
    "osd_scrub_min_interval": "86400",
    "osd_scrub_max_interval": "604800",
    "osd_scrub_interval_randomize_ratio": "0.5",
    "osd_scrub_chunk_min": "5",
    "osd_scrub_chunk_max": "25",
    "osd_scrub_sleep": "0",
    "osd_scrub_auto_repair": "false",
    "osd_scrub_auto_repair_num_errors": "5",
    "osd_deep_scrub_interval": "604800",
    "osd_deep_scrub_randomize_ratio": "0.15",
    "osd_deep_scrub_stride": "524288",
    "osd_deep_scrub_update_digest_min_age": "7200",
    "osd_debug_scrub_chance_rewrite_digest": "0",
    "osd_scrub_priority": "5",
    "osd_scrub_cost": "52428800",


The deep-scrub and scrub dates in pg dump are updated after each run. Does anyone have an idea why this is happening and how to fix it? Pool size is 2, if that matters.
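For reference, this is roughly how the stamps can be checked per PG, and how scrubbing can be paused cluster-wide while investigating. These are standard Ceph commands; note that the noscrub/nodeep-scrub flags are only a workaround, not a fix for the underlying rescheduling:

```shell
# Inspect the per-PG scrub timestamps (last_scrub_stamp /
# last_deep_scrub_stamp) for the affected PG:
ceph pg 11.34a query | grep -i scrub_stamp

# Pause all scrubbing cluster-wide while investigating
# (lift again with "ceph osd unset noscrub" etc.):
ceph osd set noscrub
ceph osd set nodeep-scrub
```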

Thanks for any ideas!

Arvydas



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
