Hi,
you could lower the recovery settings back to their defaults and see if that helps:
osd_max_backfills = 1
osd_recovery_max_active = 3
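You can apply these at runtime via the config database so the OSDs pick them up without a restart. A minimal sketch (osd.0 in the last line is just an example daemon ID for verification, use any of yours):

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 3
ceph config show osd.0 | grep -e osd_max_backfills -e osd_recovery_max_active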
Regards,
Eugen
Quoting Kamil Szczygieł <kamil@xxxxxxxxxxxx>:
Hi,
We're running Octopus with 3 control plane nodes (12 cores, 64 GB
memory each) running mon, mds, and mgr, plus 4 data nodes (12 cores,
256 GB memory, 13x10TB HDDs each). We increased the number of PGs in
our pool, which sent all OSDs into a frenzy, constantly reading an
average of 900 MB/s (based on iotop). This has resulted in slow ops
and very low recovery speed. Any tips on how to handle this kind of
situation? We have osd_recovery_sleep_hdd set to 0.2,
osd_recovery_max_active set to 5, and osd_max_backfills set to 4.
Some OSDs are constantly reporting slow ops, and iowait on the
machines sits at 70-80%.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx