Hello Bill,
Either 2048 or 4096 should be acceptable. 4096 gives roughly a 300 PG per OSD ratio, which would leave room for tripling the OSD count without needing to increase the PG number, while 2048 gives roughly 150 PGs per OSD, leaving room for only about a 50% OSD count expansion. The issue with too few PGs is poor data distribution, so it's all about having enough PGs to get good data distribution without going so high that you hit resource exhaustion during recovery.
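For what it's worth, the ratio math above is just pg_num times the replica count divided by the OSD count. A quick back-of-the-envelope sketch (my own helper, not part of the pgcalc tool):

```python
def pgs_per_osd(pg_num: int, replicas: int, osd_count: int) -> float:
    """Approximate PGs landing on each OSD: every PG is stored
    on `replicas` OSDs, spread across `osd_count` OSDs."""
    return pg_num * replicas / osd_count

# Bill's cluster: 1 pool, 40 OSDs, replica size 3
print(pgs_per_osd(4096, 3, 40))  # ~307 PGs per OSD
print(pgs_per_osd(2048, 3, 40))  # ~154 PGs per OSD
```

With multiple pools you'd sum pg_num * replicas over all pools before dividing.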
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Jan 7, 2015 at 4:34 PM, Sanders, Bill <Bill.Sanders@xxxxxxxxxxxx> wrote:
This is interesting. Kudos to you guys for getting the calculator up, I think this'll help some folks.
I have 1 pool, 40 OSDs, and replica of 3. I based my PG count on: http://ceph.com/docs/master/rados/operations/placement-groups/
'''
Less than 5 OSDs set pg_num to 128
Between 5 and 10 OSDs set pg_num to 512
Between 10 and 50 OSDs set pg_num to 4096
'''
But the calculator gives a different result of 2048. Out of curiosity, what sorts of issues might one encounter by having too many placement groups? I understand there's some resource overhead. I don't suppose it would manifest itself in a recognizable way?
Bill
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Michael J. Kidd [michael.kidd@xxxxxxxxxxx]
Sent: Wednesday, January 07, 2015 3:51 PM
To: Loic Dachary
Cc: ceph-users@xxxxxxxx
Subject: Re: PG num calculator live on Ceph.com
> Where is the source ?
On the page.. :) It does link out to jquery and jquery-ui, but all the custom bits are embedded in the HTML.
Glad it's helpful :)
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Jan 7, 2015 at 3:46 PM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
On 07/01/2015 23:08, Michael J. Kidd wrote:
> Hello all,
> Just a quick heads up that we now have a PG calculator to help determine the proper PG per pool numbers to achieve a target PG per OSD ratio.
>
> http://ceph.com/pgcalc
>
> Please check it out! Happy to answer any questions, and always welcome any feedback on the tool / verbiage, etc...
Great work ! That will be immensely useful :-)
Where is the source ?
Cheers
>
> As an aside, we're also working to update the documentation to reflect the best practices. See Ceph.com tracker for this at:
> http://tracker.ceph.com/issues/9867
>
> Thanks!
> Michael J. Kidd
> Sr. Storage Consultant
> Inktank Professional Services
> - by Red Hat
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
Loïc Dachary, Artisan Logiciel Libre