Re: slow backfilling and recovering

Hi,

What version of Ceph is this? Which scheduler is in use? mClock or WPQ?

$ ceph config show osd.0 osd_op_queue

If the OSD's scheduler is mClock, then changing osd_max_backfills and osd_recovery_max_active will have no effect unless you set osd_mclock_override_recovery_settings to true (not recommended).
mClock requires using profiles instead. Read this [1].
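For example, here is a rough sketch of how you would check the scheduler and switch profiles (assuming a Reef-era cluster; the profile names below are the built-in ones, adapt to your environment):

```shell
# Check which scheduler the OSDs are using: mclock_scheduler or wpq
ceph config show osd.0 osd_op_queue

# With mClock active, tune recovery speed via a profile rather than
# osd_max_backfills / osd_recovery_max_active.
# Built-in profiles: balanced (default), high_client_ops, high_recovery_ops
ceph config set osd osd_mclock_profile high_recovery_ops

# Once backfill/recovery has caught up, switch back to the default
ceph config set osd osd_mclock_profile balanced
```

high_recovery_ops gives recovery and backfill a larger share of each OSD's IOPS at the expense of client traffic, so only leave it set while you actually need the faster recovery.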

Regards,
Frédéric.

[1] https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/?#recovery-backfill-options

----- On 31 Jan 25, at 8:25, Jaemin Joo jm7.joo@xxxxxxxxx wrote:

> Hi, all
> 
> I am suffering from slow backfilling and recovering.
> I increased "osd_max_backfills" = 64 and "osd_recovery_max_active" = 64. I know
> it's unnecessary to increase them further in my Ceph cluster.
> When I checked pg query, I found the number of
> "backfills_in_flight"&"recovering" is 4.
> I set "osd_recovery_max_single_start = 4", so I think
> "osd_recovery_max_single_start" affects the number of
> "backfills_in_flight" & "recovering".
> I increased "osd_recovery_max_single_start" to 16, but the number of
> "backfills_in_flight" & "recovering" in pg query did not change.
> I know "osd_recovery_max_single_start" can be changed at runtime.
> Should I restart the OSDs to increase the number of
> "backfills_in_flight" & "recovering", or is there another way?
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx