Pacific PG count questions

Hi,

I have a 6-node cluster running Pacific 16.2.6 with 54 x 10 TB HDDs and 12 x
6.4 TB NVMe drives. By default, the autoscaler appears to scale each pool
down to 32 PGs, which causes a very uneven data distribution and somewhat
lower performance.
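
For reference, "ceph osd pool autoscale-status" shows what the autoscaler
intends, and the per-pool mode and PG count can be pinned manually, e.g. for
the volumes pool below:

ceph osd pool autoscale-status
ceph osd pool set volumes pg_autoscale_mode off
ceph osd pool set volumes pg_num 1024

("warn" is also a valid pg_autoscale_mode if you'd rather keep the
autoscaler's advice visible.)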

I remember there used to be a neat public PG calculator, but I was unable to
find a working copy, so I used https://access.redhat.com/labs/cephpgc/
instead (it is subscription-only; I had access through my work account).

These are the PG numbers it suggested for my setup, which I have already
applied to the cluster:

POOL                    PGS
images                  128
volumes                1024
backups                 256
vms                     512
device_health_metrics    64
volumes-nvme            128
ec-volumes-meta         128
ec-volumes-data         256

In the calculator I indicated that volumes will use 50% of the space, vms
20%, backups 25% and images 5%. These pools are bound to HDD storage via
CRUSH rules. Please disregard the EC pools, as they are for testing only and
not in use, and volumes-nvme, as that pool is bound to the NVMe drives.
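
For anyone who wants to sanity-check the numbers without the calculator,
they roughly follow the usual rule of thumb of (number of OSDs x ~100 target
PGs per OSD x expected data fraction / replica size), rounded to a power of
two. A quick Python sketch of that arithmetic, assuming 3x replication on
the HDD pools (the calculator applies its own rounding rules, so it doesn't
land on the same power of two in every case):

def suggested_pg_num(osds, fraction, size=3, target_per_osd=100):
    # Classic rule of thumb: ~100 PGs per OSD, split across pools by data share.
    raw = osds * target_per_osd * fraction / size
    # Round up to the next power of two (the calculator's own rounding differs a bit).
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

hdd_osds = 54
for pool, fraction in [("volumes", 0.50), ("vms", 0.20),
                       ("backups", 0.25), ("images", 0.05)]:
    print(pool, suggested_pg_num(hdd_osds, fraction))

This prints 1024, 512, 512 and 128; the calculator landed on 256 for
backups, so its rounding clearly differs there, but the ballpark is the
same.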

My questions are:

1) Do these PG numbers look reasonable considering the current cluster
hardware and the number of drives?

2) Can these numbers be improved for data reliability and performance,
bearing in mind that the cluster is expected to grow at some point?

3) The PG count per drive varies quite a bit, between 123 and 158 PGs. Can
this be adjusted, or will the balancer even out the distribution over time?
At the moment the balancer reports "Unable to find further optimization, or
pool(s) pg_num is decreasing, or distribution is already perfect".
Space-wise, the data distribution is quite uniform.
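
(For context, the per-OSD figures above are what "ceph osd df tree" reports
in its PGS column, and the quoted message is from the balancer status
output.)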

4) The volumes-nvme pool is bound to the NVMe drives, and as there are only
12 of them, those OSDs have 28-33 PGs each. This is much lower than the
cluster average, and I get warnings that these OSDs have a very different PG
count from the rest. Should I ignore these warnings?
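
(If my arithmetic is right, that count is expected: assuming 3x replication
on volumes-nvme, 128 PGs x 3 / 12 NVMe OSDs is about 32 PGs per drive, which
matches what I see.)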

I would very much appreciate any suggestions!

Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


