On 21/12/17 10:28, Burkhard Linke wrote:
> OSD config section from ceph.conf:
>
> [osd]
> osd_scrub_sleep = 0.05
> osd_journal_size = 10240
> osd_scrub_chunk_min = 1
> osd_scrub_chunk_max = 1
> max_pg_per_osd_hard_ratio = 4.0
> osd_max_pg_per_osd_hard_ratio = 4.0
> bluestore_cache_size_hdd = 5368709120
> mon_max_pg_per_osd = 400

Consider also playing with the following OSD parameters:

osd_recovery_max_active
osd_recovery_sleep
osd_recovery_sleep_hdd
osd_recovery_sleep_hybrid
osd_recovery_sleep_ssd

In my anecdotal experience, the forced wait between requests (controlled by the recovery_sleep parameters) caused a significant slowdown in recovery speed on my cluster, though even at the default values recovery wasn't nearly as slow as on your cluster - it sounds like something else is probably wrong.

Rich
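For experimenting, these parameters can be changed at runtime with `ceph tell ... injectargs` rather than restarting OSDs. A sketch (the values below are illustrative examples only, not tuning recommendations):

```shell
# Inject new values into all running OSDs at once.
# Lowering the recovery sleep shortens the forced wait between recovery
# requests, speeding recovery at the cost of client I/O latency.
ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.01'

# Allow more concurrent recovery operations per OSD (example value).
ceph tell osd.* injectargs '--osd_recovery_max_active 5'

# injectargs changes do not survive a restart; to persist them, add the
# settings to the [osd] section of ceph.conf, e.g.:
# [osd]
# osd_recovery_sleep_hdd = 0.01
# osd_recovery_max_active = 5
```

Checking the currently active value first (e.g. `ceph daemon osd.0 config get osd_recovery_sleep_hdd` on the OSD host) makes it easier to revert if client latency suffers.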
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com