Questions about PG auto-scaling and node addition

Hello,

We have a cluster of 21 nodes, each with 12 x 18 TB HDD OSDs and 2 NVMe devices for DB/WAL.
We need to add more nodes.
The last time we expanded, pg_num stayed at 1024, so the number of PGs per OSD dropped.
We are currently at 43 PGs per OSD.
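For context, the 43 figure is roughly what we would expect from a back-of-the-envelope check; here is a minimal sketch of that arithmetic, assuming the 1024-PG pool is the EC 8+3 pool mentioned below and that all 21 x 12 = 252 HDD OSDs hold its data (other pools would add a few more PG shards per OSD on top of this):

```python
# Rough PG-shards-per-OSD estimate for the current layout (assumptions noted above).
nodes = 21
osds_per_node = 12            # 18 TB HDDs; the NVMe devices only hold DB/WAL, they are not OSDs
osds = nodes * osds_per_node  # 252

pg_num = 1024                 # pg_num of the EC pool
ec_k, ec_m = 8, 3             # EC 8+3 -> 11 shards per PG
shards = pg_num * (ec_k + ec_m)

print(f"{osds} OSDs, {shards} PG shards")
print(f"~{shards / osds:.0f} PG shards per OSD")  # ~45, close to the 43 we currently see
```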

Does auto-scaling work correctly in Ceph version 17.2.5?
Should we increase the number of PGs before adding nodes?
Should we keep PG auto-scaling active?

If we disable auto-scaling, should we increase the number of PGs to reach 100 PGs per OSD?

Please note that we use this cluster with a large EC pool (8+3); a rough sizing sketch is included below.
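For reference, this is the sizing calculation we are working from to reach roughly 100 PG shards per OSD. It only counts the big EC pool, and the 25-node figure is just a placeholder since we have not decided how many nodes to add:

```python
import math

# Rough sizing: pg_num needed for ~100 PG shards per OSD with an EC 8+3 pool (11 shards/PG).
# Only the EC pool is counted here; other pools contribute additional PG shards per OSD.
target_per_osd = 100
ec_shards = 8 + 3
osds_per_node = 12

for nodes in (21, 25):                        # current size and a placeholder expanded size
    osds = nodes * osds_per_node
    raw = target_per_osd * osds / ec_shards   # ideal pg_num before power-of-two rounding
    lo = 2 ** math.floor(math.log2(raw))      # nearest power of two below
    hi = 2 ** math.ceil(math.log2(raw))       # nearest power of two above
    print(f"{nodes} nodes ({osds} OSDs): ideal {raw:.0f}, "
          f"pg_num {lo} -> ~{lo * ec_shards / osds:.0f}/OSD, "
          f"pg_num {hi} -> ~{hi * ec_shards / osds:.0f}/OSD")
```

With the current 252 OSDs this lands between 2048 (~89 PG shards per OSD) and 4096 (~179), which is part of why we are unsure whether to change pg_num before or after adding the nodes.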

Thank you for your assistance.



