PG Sizing Question

Hello



I'm looking for some official guidance on PG and PGP sizing.



Is the goal to maintain approximately 100 PGs per OSD per pool, or per OSD
across the cluster as a whole?



Assume the following scenario:



A cluster with 80 OSDs across 8 nodes;

3 Pools:

-       Pool1 = Replicated 3x

-       Pool2 = Replicated 3x

-       Pool3 = Erasure Coded 6+4 (k=6, m=4)





Assuming the widely published formula:



Let (Target PGs / OSD) = 100



[ (Target PGs / OSD) * (# of OSDs) ] / (Replica Size), rounded up to the next power of two
(where Replica Size is the replica count for replicated pools, or k+m for erasure-coded pools)



-       Pool1 = (100*80)/3 = 2666.67 => 4096

-       Pool2 = (100*80)/3 = 2666.67 => 4096

-       Pool3 = (100*80)/10 = 800 => 1024



The cluster as a whole would then have 9216 PGs (with matching PGPs).
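
For clarity, here is a minimal Python sketch of the arithmetic above. It is
not a Ceph tool; the pool labels, the 100 PGs/OSD target, and the
next_power_of_two helper are just illustrative assumptions:

import math

TARGET_PGS_PER_OSD = 100   # the commonly quoted target
NUM_OSDS = 80

# pool label -> effective "size" used by the formula
# (replica count for replicated pools, k+m for erasure-coded pools)
pools = {
    "pool1 (replicated 3x)": 3,
    "pool2 (replicated 3x)": 3,
    "pool3 (EC 6+4)": 10,
}

def next_power_of_two(n):
    # round a raw PG count up to the next power of two
    return 1 << math.ceil(math.log2(n))

total = 0
for name, size in pools.items():
    raw = TARGET_PGS_PER_OSD * NUM_OSDS / size
    pg_num = next_power_of_two(raw)
    total += pg_num
    print("%-22s raw=%.2f -> pg_num=%d" % (name, raw, pg_num))

print("total pg_num across all pools:", total)

Running this reproduces the 4096 / 4096 / 1024 values and the 9216 total
shown above.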


Are there any implications (performance / monitor / MDS / RGW sizing) related
to the number of PGs created on the cluster?



I'm looking for validation and/or clarification of the above.



Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


