Ah, so I've been doing it wrong all this time (I thought we had to take the size multiple into account ourselves).
Thanks!
On Wed, Jan 7, 2015 at 4:25 PM, Michael J. Kidd <michael.kidd@xxxxxxxxxxx> wrote:
Hello Christopher,

Keep in mind that the PGs per OSD (and per pool) calculations take into account the replica count (the pool size= parameter). So, for example, if you're using the default of 3 replicas, 16 * 3 = 48 PG copies, which allows for at least one PG per OSD on that pool. Even with size=2, 32 PG copies total still gives very close to 1 PG per OSD. Since it's such a low-utilization pool, this is still sufficient.

Thanks,

Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services - by Red Hat
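To make the arithmetic above concrete, here is a minimal Python sketch (the function name is hypothetical, and the 36-OSD figure comes from the cluster described in the quoted message below):

    # Average PG copies per OSD for one pool: total placements (pg_num
    # times the replica count) spread across the cluster's OSDs.
    def pg_copies_per_osd(pg_num, pool_size, num_osds):
        return pg_num * pool_size / num_osds

    print(pg_copies_per_osd(16, 3, 36))  # 1.33... -> at least 1 PG per OSD
    print(pg_copies_per_osd(16, 2, 36))  # 0.88... -> very close to 1 per OSD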
I"m playing with this with a modest sized ceph cluster (36x6TB disks). Based on this it says that small pools (such as .users) would have just 16 PGs. Is this correct? I've historically always made even these small pools have at least as many PGs as the next power of 2 over my number of OSDs (64 in this case).On Wed, Jan 7, 2015 at 3:08 PM, Michael J. Kidd <michael.kidd@xxxxxxxxxxx> wrote:_______________________________________________Thanks!As an aside, we're also working to update the documentation to reflect the best practices. See Ceph.com tracker for this at:Please check it out! Happy to answer any questions, and always welcome any feedback on the tool / verbiage, etc...http://ceph.com/pgcalcHello all,Just a quick heads up that we now have a PG calculator to help determine the proper PG per pool numbers to achieve a target PG per OSD ratio.
On Wed, Jan 7, 2015 at 3:08 PM, Michael J. Kidd <michael.kidd@xxxxxxxxxxx> wrote:

Hello all,

Just a quick heads up that we now have a PG calculator to help determine the proper PG per pool numbers to achieve a target PG per OSD ratio:

http://ceph.com/pgcalc

As an aside, we're also working to update the documentation to reflect the best practices. See the Ceph.com tracker for this at:

http://tracker.ceph.com/issues/9867

Please check it out! Happy to answer any questions, and always welcome any feedback on the tool / verbiage, etc...

Thanks!

Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services - by Red Hat
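For context, the widely cited sizing guideline behind calculators like this is roughly (OSDs * target PGs per OSD * the pool's share of data) / replica count, rounded up to a power of 2. The sketch below is an assumption about that guideline, not the pgcalc tool's actual code; the floor of 16 and the helper name are illustrative only:

    # Rough per-pool pg_num suggestion based on the common guideline;
    # NOT the actual logic of http://ceph.com/pgcalc.
    def suggested_pg_num(num_osds, pool_size, pct_data, target_per_osd=100):
        raw = num_osds * target_per_osd * pct_data / pool_size
        pg = 1 << (max(int(raw), 1) - 1).bit_length()  # round up to power of 2
        return max(16, pg)  # assumed floor, matching the 16-PG figure above

    print(suggested_pg_num(36, 3, 1.00))   # dominant pool -> 2048
    print(suggested_pg_num(36, 3, 0.001))  # tiny pool like .users -> 16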
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com