Re: Wide EC pool causes very slow backfill?

Hi Sridhar

Thank you for the suggestion and the link. We'll stick with wpq for now since it seems to work OK, upgrade to Reef once we are HEALTH_OK, and then switch back to mclock.

Best regards,

Torkil

On 18-01-2024 13:06, Sridhar Seshasayee wrote:
Hi,


> Given that the first host added had 19 OSDs, with none of them anywhere
> near the target capacity, and the one we just added has 22 empty OSDs,
> having just 22 PGs backfilling and 1 recovering seems somewhat
> underwhelming.
>
> Is this to be expected with such a pool? Mclock profile is
> high_recovery_ops.


Since you are already using the high_recovery_ops profile, you could
additionally try incrementing "osd_max_backfills" (default: 1) by a
small amount (2 or 3) using the following procedure and see if it
improves the backfill rate:

https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
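For reference, the linked procedure boils down to something like the following (a sketch only; check the doc above for your exact release, and note that the backfill value of 3 here is just an example):

```shell
# mclock normally ignores manual changes to recovery/backfill limits,
# so the override flag must be enabled first.
ceph config set osd osd_mclock_override_recovery_settings true

# Then raise the backfill limit by a small amount (default is 1).
ceph config set osd osd_max_backfills 3

# Verify the running value on an OSD, e.g. osd.0:
ceph config show osd.0 osd_max_backfills
```

Remember to revert the override flag once the cluster is back to HEALTH_OK so the mclock profile defaults apply again.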

-Sridhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark
