Re: Decrepit ceph cluster performance

On Sun, Aug 13, 2023 at 11:05 PM Tyler Stachecki
<stachecki.tyler@xxxxxxxxx> wrote:
> Also a good point. OP: do you have any non-standard ceph.conf
> settings?

osd backfill full ratio = 0.90
osd op queue cut off = high
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 1
osd memory target = 8589934592

Other than the memory target (which is rarely reached), these are all
intended to keep the cluster serving clients at all while a recovery
is in progress (e.g., when we converted from filestore to bluestore a
few years ago and had to rebuild all the OSDs one by one).
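
(For what it's worth: on releases with the centralized config database,
Mimic and later, the same throttling can also be applied at runtime
instead of via ceph.conf. Roughly, and treat this as a sketch rather
than copy-paste, since exact option names can vary by release:

  # approximate config-database equivalents of the ceph.conf lines above
  # (double-check option names on your version)
  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1
  ceph config set osd osd_recovery_op_priority 1
  ceph config set osd osd_op_queue_cut_off high
  ceph config set osd osd_memory_target 8589934592

If I recall correctly, osd_op_queue_cut_off still needs an OSD restart
to take effect, and the backfill full ratio is a cluster-wide OSDMap
setting on newer releases, set with "ceph osd set-backfillfull-ratio
0.90", rather than a per-OSD option.)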

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



