Thank you for your quick response. My cluster version is 18.2.1 and the OSDs' scheduler is mClock. I set osd_mclock_override_recovery_settings to true when I changed osd_max_backfills and osd_recovery_max_active. Do I also need to set osd_mclock_override_recovery_settings to true when I change osd_recovery_max_single_start? Could you explain in more detail what osd_recovery_max_single_start does? I wonder whether it affects the number of "backfills_in_flight" and "recovering" entries in pg query.

On Fri, Jan 31, 2025 at 5:05 PM, Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote:

> Hi,
>
> What version of Ceph is this? Which scheduler is in use? mClock or WPQ?
>
> $ ceph config show osd.0 osd_op_queue
>
> If the OSD's scheduler is mClock, then unless you set
> osd_mclock_override_recovery_settings to true (not recommended), changing
> osd_max_backfills and osd_recovery_max_active will have no impact.
> mClock requires using profiles. Read this [1].
>
> Regards,
> Frédéric.
>
> [1]
> https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/?#recovery-backfill-options
>
> ----- On Jan 31, 2025, at 8:25, Jaemin Joo jm7.joo@xxxxxxxxx wrote:
>
> > Hi all,
> >
> > I am suffering from slow backfilling and recovery.
> > I increased "osd_max_backfills" to 64 and "osd_recovery_max_active" to 64.
> > I know it's unnecessary to increase them further in my Ceph cluster.
> > When I checked pg query, I found the number of
> > "backfills_in_flight" and "recovering" entries is 4.
> > I had set "osd_recovery_max_single_start = 4", so I think
> > "osd_recovery_max_single_start" affects the number of
> > "backfills_in_flight" and "recovering".
> > I increased it to "osd_recovery_max_single_start = 16", but the number of
> > "backfills_in_flight" and "recovering" in pg query did not change.
> > I know "osd_recovery_max_single_start" can be changed at runtime.
> > Should I restart the OSDs to increase the number of
> > "backfills_in_flight" and "recovering", or is there another way?
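
P.S. For anyone finding this thread in the archives later, here is a minimal sketch of the commands involved, based on the Reef mClock docs linked above. The profile name high_recovery_ops and the value 8 are illustrative examples only, not the values from my cluster:

# Confirm which scheduler the OSD is running (mclock_scheduler or wpq)
$ ceph config show osd.0 osd_op_queue

# Recommended route with mClock: switch the profile instead of
# overriding individual recovery/backfill options
$ ceph config set osd osd_mclock_profile high_recovery_ops

# Manual overrides only take effect after enabling this (not recommended)
$ ceph config set osd osd_mclock_override_recovery_settings true
$ ceph config set osd osd_max_backfills 8
$ ceph config set osd osd_recovery_max_active 8

# Verify what the OSD actually applied at runtime
$ ceph config show osd.0 osd_max_backfills
$ ceph config show osd.0 osd_recovery_max_single_start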