Hi Karan,
Surely this doesn't apply to all pools though? Several of the pools created for the RADOS gateway hold very small numbers of objects, and if I set 256 PGs on all pools I would get warnings about the ratio of objects to PGs.
Best regards
Graeme Lambert
On 27/01/14 11:04, Karan Singh wrote:
Hello Graeme
Based on your scenario: 6 OSDs, and a required replication size of 3.

Number of PGs per pool should be (100 * 6) / 3 = 200, rounded up to the next power of 2 = 256.

So you should go with 256 for both pg_num and pgp_num on each pool.
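For illustration, a minimal Python sketch of that calculation (the helper name is mine, not a Ceph tool):

def recommended_pgs(osds, replicas):
    # (100 * OSDs) / replicas, rounded up to the next power of 2
    raw = (100 * osds) // replicas        # 6 OSDs, size 3 -> 200
    return 1 << (raw - 1).bit_length()    # next power of 2 -> 256

print(recommended_pgs(6, 3))  # 256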
Remember, more OSDs allow more PGs per pool, which leads to better performance and cluster health. So if you can increase the number of OSDs, it would be a good deal.
Many Thanks
Karan Singh
From: "Sherry Shahbazi" <shoosah@xxxxxxxxx>
To: "Graeme Lambert" <glambert@xxxxxxxxxxx>, "Ирек Фасихов" <malmyzh@xxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Sunday, 26 January, 2014 12:09:03 AM
Subject: Re: Calculating required number of PGs per pool
Hi Graeme,
I think you need around 600 PGs per pool, since 1200 is the number of PGs for the whole cluster and that total should be divided between the pools: with two pools, 1200 / 2 = 600 per pool.

PS. Make sure to set pgp_num to the same value as pg_num on each pool. In your case it is going to be:
ceph osd pool set pool_name1 pg_num 600
ceph osd pool set pool_name1 pgp_num 600
ceph osd pool set pool_name2 pg_num 600
ceph osd pool set pool_name2 pgp_num 600
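For several pools, a minimal Python sketch of scripting those commands (assuming the ceph CLI is on the PATH; the pool names are the placeholders from above):

import subprocess

pools = {"pool_name1": 600, "pool_name2": 600}  # pool -> target PG count

for pool, pgs in pools.items():
    # pg_num must be set before pgp_num can be raised to match it
    for key in ("pg_num", "pgp_num"):
        subprocess.run(["ceph", "osd", "pool", "set", pool, key, str(pgs)],
                       check=True)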
Thanks
Sherry
Hi,

I have read this, but if I'm understanding it correctly it contradicts itself.

This page http://ceph.com/docs/master/rados/configuration/pool-pg-config-ref/ has:

(100 * #OSDs) / 3 = 200, which works out for me as the max PGs per OSD.

So does that give me 1200 PGs (200 * 6 OSDs) to split across all of my pools? Or is it a maximum of 200 per pool?
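For concreteness, here is where the 1200 comes from, as arithmetic in Python (the numbers are the ones from this thread):

osds, size = 6, 3
cluster_total = 100 * osds // size   # 200, straight from the docs formula
if_per_osd = cluster_total * osds    # 1200, if 200 were a per-OSD maximum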
Best regards
Graeme Lambert
Address: Adepteo Limited, 24 Market Street, Tottington, Bury, BL8 4AD
Web: http://www.adepteo.net
Tel/SMS: 0161 710 3000 - Switchboard
Tel/SMS: 0161 710 2000 - Support
Fax: 0161 710 3019
On 24/01/14 09:51, Ирек Фасихов wrote:
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com