Re: PG num calculator live on Ceph.com

Hi,

I'm playing with this on a modest-sized Ceph cluster (36x6TB disks). Based on the calculator, small pools (such as .users) would have just 16 PGs. Is this correct? I've historically always given even these small pools at least as many PGs as the next power of 2 above my number of OSDs (64 in this case).
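For reference, the power-of-2 heuristic described above, and the rough shape of a target-ratio calculation like the one pgcalc performs, can be sketched in Python. The function names and parameters are illustrative only, and the real pgcalc applies additional rules (per-pool minimums, data-percentage weighting details) that are not reproduced here:

```python
# Illustrative sketch only; names are hypothetical and the actual pgcalc
# tool applies extra rules (e.g. per-pool minimums) not reproduced here.

def next_power_of_2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def suggested_pg_num(osd_count, target_pgs_per_osd, percent_data, replica_size):
    """Rough per-pool pg_num: the pool's share of the target PG-per-OSD
    budget, rounded up to a power of two."""
    raw = osd_count * target_pgs_per_osd * (percent_data / 100.0) / replica_size
    return next_power_of_2(max(int(round(raw)), 1))

# The heuristic from the email: next power of 2 above the OSD count.
print(next_power_of_2(36))  # 64 for a 36-OSD cluster
```

A pool holding 25% of the data on that 36-OSD cluster, with 3x replication and a target of 100 PGs per OSD, would come out at 512 PGs under this sketch; a near-empty pool like .users falls to the minimum, which matches the small values the calculator reports.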

All the best,

~ Christopher

On Wed, Jan 7, 2015 at 3:08 PM, Michael J. Kidd <michael.kidd@xxxxxxxxxxx> wrote:
Hello all,
  Just a quick heads up that we now have a PG calculator to help determine the proper PG per pool numbers to achieve a target PG per OSD ratio. 

http://ceph.com/pgcalc

Please check it out! Happy to answer any questions, and we always welcome feedback on the tool, wording, etc.

As an aside, we're also working to update the documentation to reflect these best practices. See the Ceph tracker issue for this at:
http://tracker.ceph.com/issues/9867

Thanks!
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
 - by Red Hat

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

