Hi,

With your config you must have an avg 400 PGs per OSD. Do you find peering/backfilling/recovery to be responsive? How is the CPU and memory usage of your OSDs during backfilling?

Cheers, Dan

-- Dan van der Ster || Data & Storage Services || CERN IT Department

-------- Original Message --------
From: "McNamara, Bradley" <Bradley.McNamara@xxxxxxxxxxx>
Sent: Thursday, March 13, 2014 08:03 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: PG Calculations

There was a very recent thread discussing PG calculations, and it made me doubt my cluster setup. So, Inktank, please provide some clarification.
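(For anyone checking Dan's figure: a quick back-of-the-envelope sketch, using the numbers from the quoted message below, 3072 total PGs at size 3 on 24 OSDs. The variable names are just for illustration.)

```python
# Rough check of the ~400 PGs-per-OSD estimate.
# With replication, each PG places one copy on `replica_size` OSDs,
# so the average PG replicas per OSD is total_pgs * replica_size / num_osds.
total_pgs = 3072     # from the quoted message
replica_size = 3
num_osds = 24

pgs_per_osd = total_pgs * replica_size / num_osds
print(pgs_per_osd)   # 384.0, i.e. roughly 400 PG replicas per OSD
```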
I followed the documentation, and interpreted that documentation to mean that PG and PGP calculation was based upon a per-pool calculation. The recent discussion introduced a slightly different formula adding in the total number of pools:
# OSDs * 100 / 3
vs.
# OSDs * 100 / (3 * # pools)
(where 3 is the replica size)
My current cluster has 24 OSDs, a replica size of 3, and the standard three pools, RBD, DATA, and METADATA. My current total PG count is 3072, which by the second formula is way too many. So, do I have too many? Does it need to be addressed, or can it wait until I add more OSDs, which will bring the ratio closer to ideal? I'm currently using only RBD and CephFS, no RadosGW.
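(To make the two formulas concrete: a small sketch of the arithmetic for this cluster. The `pg_target` helper and the round-up-to-power-of-two step are my own illustration of the common guidance, not official Ceph tooling.)

```python
import math

def pg_target(num_osds, replica_size, num_pools=1):
    """Target PGs for one pool per the thread's formulas,
    rounded up to the next power of two."""
    raw = num_osds * 100 / (replica_size * num_pools)
    return 2 ** math.ceil(math.log2(raw))

# Formula 1 (per pool, ignoring pool count): 24*100/3 = 800
print(pg_target(24, 3))       # 1024 -> 3 pools * 1024 = 3072, the poster's total
# Formula 2 (divided across 3 pools): 24*100/(3*3) ~ 266.7
print(pg_target(24, 3, 3))    # 512 per pool -> 1536 total
```

This shows where the 3072 came from: applying the first formula per pool and rounding 800 up to 1024 for each of the three pools.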
Thank you!
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com