Clarifications about automatic PG scaling

Dear Ceph users,

I'm setting up a cluster. At the moment I have 56 OSDs for a total available space of 109 TiB, and an erasure-coded pool with a total occupancy of just 90 GB. The autoscale mode for the pool is set to "on", but I still have only 32 PGs. As far as I understand (admittedly not that much, I'm a Ceph beginner), the rule of thumb is roughly 100 PGs per OSD, so I would expect the autoscaler to increase the PG count, but that's not the case. If my expectation is correct, I can't tell whether this is a config issue (and if so, which config options I should tweak), or whether it's expected behavior, e.g. because the occupancy is very low and thus the PGs are not scaled up.
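
In case it's useful, below is a minimal sketch of what I've been looking at, with "mypool" standing in for my actual pool name. From the docs I gather that target_size_ratio and the bulk flag are the knobs that influence the autoscaler's decision, but I'm not certain these are the right ones:

    # Show the autoscaler's view of each pool: current PG count, its
    # computed target (NEW PG_NUM), and the size/ratio inputs it uses.
    ceph osd pool autoscale-status

    # Tell the autoscaler the pool is expected to grow to ~80% of the
    # cluster, so it sizes PGs for future use rather than current usage.
    ceph osd pool set mypool target_size_ratio 0.8

    # Alternatively, mark the pool as "bulk" so the autoscaler starts it
    # with a full PG complement instead of scaling up as data arrives.
    ceph osd pool set mypool bulk true

Would either of these be the recommended way to get the PG count up, or am I misreading how the autoscaler works?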
Any help/hint is really appreciated.
Thanks in advance,

Nicola