Hi,
I'm playing with this with a modest-sized Ceph cluster (36x 6TB disks). Based on this, it says that small pools (such as .users) would have just 16 PGs. Is this correct? I've historically always made even these small pools have at least as many PGs as the next power of 2 over my number of OSDs (64 in this case).
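(For reference, the pgcalc-style math I'm comparing against can be sketched roughly as below. This is only a sketch: the ~100 PGs-per-OSD target, 3x replication default, the %data weight, and the function name are my own assumptions, not taken from the tool itself.)

```python
import math

def suggested_pgs(osds, pool_pct_data, target_pgs_per_osd=100, replicas=3):
    """Suggest a per-pool PG count, rounded up to a power of 2.

    Assumed formula: (OSDs * target PGs per OSD * pool's share of data)
    divided by the replica count, then rounded up to the next power of 2.
    """
    raw = (osds * target_pgs_per_osd * pool_pct_data) / replicas
    return max(1, 2 ** math.ceil(math.log2(raw)))

# 36 OSDs, a small pool holding ~1% of the data:
print(suggested_pgs(36, 0.01))  # -> 16, well below the 64 I'd have used
```

Under these assumptions a 1%-of-data pool on 36 OSDs lands at 16 PGs, which matches the number I'm seeing, whereas my old rule of thumb would have given 64.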
All the best,
On Wed, Jan 7, 2015 at 3:08 PM, Michael J. Kidd <michael.kidd@xxxxxxxxxxx> wrote:
Hello all,

Just a quick heads up that we now have a PG calculator to help determine the proper PG per pool numbers to achieve a target PG per OSD ratio:

http://ceph.com/pgcalc

As an aside, we're also working to update the documentation to reflect the best practices. See the Ceph.com tracker for this at:

http://tracker.ceph.com/issues/9867

Please check it out! Happy to answer any questions, and always welcome any feedback on the tool / verbiage, etc...

Thanks!
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services - by Red Hat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com