Re: Adding new OSDs - also adding PGs?

It depends on the cluster. In general, I would say that if your PG count
is already good in terms of PGs per OSD (say, between 100 and 200 on each
OSD), add the capacity first and then re-evaluate your PG count afterward.
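
For example (just a sketch, and assuming your CephFS data pool is named
"cephfs_data"; substitute your own pool names), these will show the
current PGs-per-OSD distribution and the per-pool PG settings:

    ceph osd df tree                  # the PGS column shows PGs held per OSD
    ceph osd pool ls detail           # per-pool pg_num / pgp_num
    ceph osd pool autoscale-status    # what the pg_autoscaler would do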

If you have a lot of time before the new gear will be racked, and the
cluster could get through some PG splits before the gear is integrated,
you may want to get that work done now.
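
If you do go ahead with the splits first, the rough shape of it (again
just a sketch; "cephfs_data" and the target of 2048 PGs are placeholders,
pick values appropriate for your pools and OSD count) would be:

    # keep the autoscaler from fighting the manual change
    ceph osd pool set cephfs_data pg_autoscale_mode warn

    # raise pg_num; on Nautilus and later pgp_num is ramped up for you
    ceph osd pool set cephfs_data pg_num 2048

    # watch the split / backfill progress
    ceph -s
    ceph osd pool get cephfs_data pg_num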

Respectfully,

*Wes Dillingham*
LinkedIn <http://www.linkedin.com/in/wesleydillingham>
wes@xxxxxxxxxxxxxxxxx




On Tue, Jun 4, 2024 at 4:27 PM Erich Weiler <weiler@xxxxxxxxxxxx> wrote:

> Hi All,
>
> I'm going to be adding a bunch of OSDs to our cephfs cluster shortly
> (increasing the total size by 50%).  We're on Reef, and will be
> deploying using the cephadm method, and the OSDs are exactly the same
> size and disk type as the current ones.
>
> So, after adding the new OSDs, my understanding is that ceph will begin
> rebalancing the data.  I will also probably want to increase my PGs to
> accommodate the new OSDs being added.  My question is basically: should
> I wait for the rebalance to finish before increasing my PG count, which
> would kick off another rebalance action for the new PGs?  Or should I
> increase the PG count as soon as the rebalance starts after adding the
> new OSDs, so that it creates the new PGs and rebalances onto the new
> OSDs at the same time?
>
> Thanks for any guidance!
>
> -erich
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



