Re: Calculating required number of PGs per pool

Hi Karan,

Surely this doesn't apply to all pools though?  Several of the pools created for the RADOS gateway hold very small numbers of objects, and if I set 256 PGs on all pools I would get warnings about the ratio of objects to PGs.

Best regards
 

Graeme Lambert


 
On 27/01/14 11:04, Karan Singh wrote:
Hello Graeme

Based on your scenario

6 OSDs, replication size of 3.

The formula gives 200 PGs per pool, which rounds up to 256 (for the final value, round the formula's answer up to the next power of 2).

So you should go with 256 for both pg_num and pgp_num per pool.

Remember, more OSDs means more PGs per pool, which leads to better performance and cluster health. So if you can increase the number of OSDs, it would be a good deal.
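For reference, a minimal sketch of that arithmetic in shell (the variable names are just for illustration, not from any Ceph tool):

# (100 * OSDs) / replica size, rounded up to the next power of 2
osds=6; size=3
total=$(( (100 * osds) / size ))                  # 200
pg=1
while [ "$pg" -lt "$total" ]; do pg=$(( pg * 2 )); done
echo "$pg"                                        # 256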

Many Thanks
Karan Singh



From: "Sherry Shahbazi" <shoosah@xxxxxxxxx>
To: "Graeme Lambert" <glambert@xxxxxxxxxxx>, "Ирек Фасихов" <malmyzh@xxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Sunday, 26 January, 2014 12:09:03 AM
Subject: Re: Calculating required number of PGs per pool

Hi Graeme, 

I think you need around 600 PGs per pool, since 1200 is the total number of PGs for the cluster and that total should be divided up between the pools in the cluster.

PS. Make sure to set pgp_num the same as pg_num for each pool. In your case it is going to be:
ceph osd pool set pool_name1 pg_num 600
ceph osd pool set pool_name1 pgp_num 600
ceph osd pool set pool_name2 pg_num 600
ceph osd pool set pool_name2 pgp_num 600
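If it helps, the values can be read back afterwards to confirm they were applied:

ceph osd pool get pool_name1 pg_num
ceph osd pool get pool_name1 pgp_num

Note that, as far as I know, pg_num on an existing pool can only be increased, not decreased.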

Thanks
Sherry


On Friday, January 24, 2014 11:13 PM, Graeme Lambert <glambert@xxxxxxxxxxx> wrote:
Hi,

I have read this; however, it seems contradictory if I'm understanding it correctly.

This page
http://ceph.com/docs/master/rados/configuration/pool-pg-config-ref/ has:

(100 * #OSDs) / 3 = 200, which for me works out as the maximum number of PGs per OSD.

So that gives me 1200 PGs to split across all of my pools?

OR, is it a maximum of 200 per pool?

Best regards
 
Graeme Lambert
Address: Adepteo Limited
24 Market Street
Tottington
Bury
BL8 4AD
Web: http://www.adepteo.net
Tel/SMS: 0161 710 3000 - Switchboard 
Tel/SMS: 0161 710 2000 - Support
Fax: 0161 710 3019
 
On 24/01/14 09:51, Ирек Фасихов wrote:


2014/1/24 Graeme Lambert <glambert@xxxxxxxxxxx>
Hi,

I've got 6 OSDs and I want 3 replicas per object, so following the formula that's 200 PGs per OSD, which is 1,200 overall.

I've got two RBD pools and the .rgw.buckets pool that hold considerably more objects than the others (given that the RADOS gateway needs so many pools with very little in them).

How can I get the ratio of PGs to objects to be consistent across all of the pools to avoid the health warnings of:

HEALTH_WARN pool [pool-name] has too few pgs
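As a side note, a quick way to compare the per-pool object counts and PG counts behind that warning is with the standard commands:

ceph df
ceph osd dump | grep ^pool        # shows pg_num and pgp_num for each pool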
--
Best regards
 
Graeme






--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757







