On a Luminous 12.2.7 cluster, these are the defaults:
ceph daemon osd.x config show
"osd_scrub_max_interval": "604800.000000",
"osd_scrub_min_interval": "86400.000000",
"osd_scrub_interval_randomize_ratio": "0.500000",
"osd_scrub_chunk_max": "25",
"osd_scrub_chunk_min": "5",
"osd_scrub_priority": "5",
"osd_scrub_sleep": "0.000000",
"osd_deep_scrub_interval": "604800.000000",
"osd_deep_scrub_stride": "524288",
"osd_disk_thread_ioprio_class": "",
"osd_disk_thread_ioprio_priority": "-1",
You can check how your settings differ from the defaults using:
ceph daemon osd.x config diff
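If you only want to spot-check a single option rather than dump the whole config, the admin socket can also return one value at a time, for example (osd.0 is just an example id):

ceph daemon osd.0 config get osd_scrub_sleep
ceph daemon osd.0 config get osd_deep_scrub_interval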
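If you want to try the values quoted below without restarting the OSDs, something along these lines should work at runtime (a rough sketch, untested on your cluster; anything injected this way should also go into ceph.conf if you want it to survive a restart):

ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
ceph tell osd.* injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 1'
ceph tell osd.* injectargs '--osd_deep_scrub_interval 2419200'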
Kind regards,
Caspar
On Tue, 11 Dec 2018 at 12:36, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
On Tue, 11 Dec 2018 at 12:26, Caspar Smit <casparsmit@xxxxxxxxxxx> wrote:
>
> Furthermore, assuming you are running Jewel or Luminous, you can change some settings in ceph.conf to mitigate the deep-scrub impact:
>
> osd scrub max interval = 4838400
> osd scrub min interval = 2419200
> osd scrub interval randomize ratio = 1.0
> osd scrub chunk max = 1
> osd scrub chunk min = 1
> osd scrub priority = 1
> osd scrub sleep = 0.1
> osd deep scrub interval = 2419200
> osd deep scrub stride = 1048576
> osd disk thread ioprio class = idle
> osd disk thread ioprio priority = 7
>
It would be interesting to see what the defaults for those were, so
one can see which go up and which go down.
--
May the most significant bit of your life be positive.