pg deep-scrub control scheme

Hello everyone.

I have a cluster with 8321 PGs, and recently I started getting "pgs not
deep-scrubbed in time" warnings.
The reason is that I reduced osd_max_scrubs to limit the impact of scrubbing on client IO.

Here is my current scrub configuration:

~]# ceph tell osd.1 config show | grep scrub
"mds_max_scrub_ops_in_progress": "5",
"mon_scrub_inject_crc_mismatch": "0.000000",
"mon_scrub_inject_missing_keys": "0.000000",
"mon_scrub_interval": "86400",
"mon_scrub_max_keys": "100",
"mon_scrub_timeout": "300",
"mon_warn_pg_not_deep_scrubbed_ratio": "0.800000",
"mon_warn_pg_not_scrubbed_ratio": "0.500000",
"osd_debug_deep_scrub_sleep": "0.000000",
"osd_deep_scrub_interval": "1296000.000000",
"osd_deep_scrub_keys": "1024",
"osd_deep_scrub_large_omap_object_key_threshold": "200000",
"osd_deep_scrub_large_omap_object_value_sum_threshold": "1073741824",
"osd_deep_scrub_randomize_ratio": "0.080000",
"osd_deep_scrub_stride": "131072",
"osd_deep_scrub_update_digest_min_age": "7200",
"osd_max_scrubs": "1",
"osd_requested_scrub_priority": "120",
"osd_scrub_auto_repair": "false",
"osd_scrub_auto_repair_num_errors": "5",
"osd_scrub_backoff_ratio": "0.660000",
"osd_scrub_begin_hour": "0",
"osd_scrub_begin_week_day": "0",
"osd_scrub_chunk_max": "25",
"osd_scrub_chunk_min": "5",
"osd_scrub_cost": "52428800",
"osd_scrub_during_recovery": "false",
"osd_scrub_end_hour": "0",
"osd_scrub_end_week_day": "0",
"osd_scrub_extended_sleep": "0.000000",
"osd_scrub_interval_randomize_ratio": "0.500000",
"osd_scrub_invalid_stats": "true",
"osd_scrub_load_threshold": "0.500000",
"osd_scrub_max_interval": "1296000.000000",
"osd_scrub_max_preemptions": "5",
"osd_scrub_min_interval": "259200.000000",
"osd_scrub_priority": "5",
"osd_scrub_sleep": "0.000000",

I am currently trying to tune the scrub and deep-scrub intervals.

Is there a formula I can use to work out a sensible scrub/deep-scrub
configuration?

At the moment all I can do is adjust individual values one at a time, then
wait a long time to see the effect, possibly with no progress in the end.
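
The back-of-envelope check I have been trying myself looks roughly like the
sketch below; the OSD count and the average deep-scrub duration are made-up
example values that would have to be replaced with real numbers from the
cluster:

# Rough estimate: can the cluster deep-scrub every PG within the interval?
# osd_count and avg_deep_scrub_minutes are assumed example values.
pg_count = 8321
deep_scrub_interval_days = 1296000.0 / 86400     # 15 days

osd_count = 100                  # assumption: use the real OSD count
osd_max_scrubs = 1               # from the dump above
avg_deep_scrub_minutes = 20      # assumption: measure from the OSD logs

# PGs that must complete a deep scrub per day to stay inside the interval.
required_per_day = pg_count / deep_scrub_interval_days

# Very rough upper bound on deep scrubs per day, ignoring the scrub time
# window, the load threshold, and the fact that every OSD in a PG's acting
# set takes part in the scrub (so real capacity is lower).
capacity_per_day = osd_count * osd_max_scrubs * (24 * 60 / avg_deep_scrub_minutes)

print(f"need ~{required_per_day:.0f} deep scrubs/day, "
      f"rough capacity ~{capacity_per_day:.0f}/day")

If the required rate is anywhere near the rough capacity, then either
osd_max_scrubs has to go back up or osd_deep_scrub_interval has to be
stretched; that is basically the trade-off I am asking about.
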
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


