Re: Calculating PG in a mixed environment

Hello Martin,
  The proper way is to use the following calculation:

For all pools utilizing the same bucket of OSDs:

(Pool1_pg_num * Pool1_size) + (Pool2_pg_num * Pool2_size) + ... + (Pool(n)_pg_num * Pool(n)_size)
-------------------------------------------------------------------------------------------------
                                           OSD count

This value is the actual ratio of PGs per OSD in that bucket of OSDs, and it should fall between 100 and 200.
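A minimal sketch of that calculation (the pool sizes, pg_num values, and OSD count below are hypothetical examples, not figures from this thread):

```python
# PGs-per-OSD ratio for all pools sharing one bucket of OSDs:
#   sum(pg_num * size over all pools) / OSD count
def pgs_per_osd(pools, osd_count):
    """pools: list of (pg_num, size) tuples; returns the PG-per-OSD ratio."""
    return sum(pg_num * size for pg_num, size in pools) / osd_count

# Example: two replicated pools (size 3) sharing a 12-OSD HDD bucket.
ratio = pgs_per_osd([(256, 3), (128, 3)], 12)
print(ratio)  # (256*3 + 128*3) / 12 = 96.0 -- below the 100-200 target range
```

A result of 96 would mean the bucket is slightly under-provisioned with PGs relative to the 100-200 guideline above.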

For the official recommendation from the Ceph developers (which I wrote), please see:
http://ceph.com/pgcalc/

NOTE: The tool is partially broken, but the explanation at the top/bottom is sound.  I'll work to get the tool fully functional again.

Thanks,

Michael J. Kidd
Sr. Software Maintenance Engineer
Red Hat Ceph Storage
+1 919-442-8878

On Tue, Mar 15, 2016 at 11:41 AM, Martin Palma <martin@xxxxxxxx> wrote:
Hi all,

The documentation [0] gives the following formula for calculating
the number of PGs when the cluster has more than 50 OSDs:

                 (OSDs * 100)
Total PGs =  -----------------
                  pool size
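A quick sketch of that formula, assuming the round-up-to-a-power-of-two convention the Ceph documentation recommends (the 60-OSD figure is purely illustrative):

```python
# Total PGs = (OSDs * 100) / pool_size, then rounded up to the next
# power of two, as suggested in the placement-groups documentation.
def total_pgs(osds, pool_size):
    raw = osds * 100 / pool_size
    # round up to the nearest power of two
    power = 1
    while power < raw:
        power *= 2
    return power

print(total_pgs(60, 3))  # 60*100/3 = 2000, rounded up to 2048
```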

We have mixed storage servers (HDD disks and SSD disks) and have
defined different roots in our CRUSH map so that some pools map only
to HDD disks and some only to SSD disks, as described by Sebastien Han [1].

In the above formula, which number of OSDs should be used to calculate
the PGs for a pool that lives only on the HDD disks? The total number
of OSDs in the cluster, or only the number of OSDs backed by an HDD?

Best,
Martin


[0] http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups
[1] http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

