Re: What's the best way to add numerous OSDs?

Hi Fabien,

in addition to what Anthony said, you could do the following (a
consolidated sketch of the whole sequence follows the list):

- `ceph osd set nobackfill` to disable initial backfilling
- `ceph config set osd osd_mclock_override_recovery_settings true` to
allow overriding the mClock scheduler's backfill settings
- Let the orchestrator add one host at a time. Between hosts, I would
wait until all the peering is done and only the backfilling is left
over. In my experience, adding a whole host is not a problem unless you
are hit by the pglog dup bug (fixed in Pacific, IIRC)
- `ceph tell 'osd.*' injectargs '--osd-max-backfills 1'` to limit the
backfilling as much as possible
- `ceph osd unset nobackfill` to start the actual backfill process
- `ceph config set osd osd_mclock_override_recovery_settings false` after
backfilling is done. I would restart all OSDs after that to make sure
they pick up the correct backfill values again :)
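
Putting it together, a minimal sketch of the sequence (assuming a
cephadm-managed cluster; `host-01` is a placeholder hostname, and the
status checks are just one way to see when peering has settled):

    # pause backfill and allow overriding mClock's recovery limits
    ceph osd set nobackfill
    ceph config set osd osd_mclock_override_recovery_settings true

    # add one host, then wait until peering is done and only
    # backfill_wait/backfilling PG states remain before the next host
    ceph orch host add host-01    # placeholder hostname
    ceph pg stat                  # or `ceph -s`, repeat until settled

    # throttle backfill to one concurrent operation per OSD
    ceph tell 'osd.*' injectargs '--osd-max-backfills 1'

    # kick off the backfill
    ceph osd unset nobackfill

    # once backfill has finished, restore the default behaviour,
    # then restart the OSDs so they come back up with clean values
    ceph config set osd osd_mclock_override_recovery_settings false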

Make sure your mons have enough oomph to handle the workload.

That would be my approach when adding that many disks, at least.
Usually I only add 36 disks at a time, when capacity gets a little low :)



On Tue, Aug 6, 2024 at 17:10, Fabien Sirjean <
fsirjean@xxxxxxxxxxxx> wrote:

> Hello everyone,
>
> We need to add 180 20TB OSDs to our Ceph cluster, which currently
> consists of 540 OSDs of identical size (replicated size 3).
>
> I'm not sure, though: is it a good idea to add all the OSDs at once? Or
> is it better to add them gradually?
>
> The idea is to minimize the impact of rebalancing on the performance of
> CephFS, which is used in production.
>
> Thanks in advance for your opinions and feedback 🙂
>
> Wishing you a great summer,
>
> Fabien


-- 
This time, as an exception, the "UTF-8 problems" self-help group will
meet in the large hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



