Re: num of placement groups created for default pools

Thanks for the explanation, that makes sense.
Tim

-----Original Message-----
From: Tyler Brekke [mailto:tyler.brekke@xxxxxxxxxxx] 
Sent: Thursday, October 24, 2013 6:42 AM
To: Snider, Tim
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  num of placement groups created for default pools

Hey Tim,

If you deployed with ceph-deploy, then your monitors started without knowledge of how many OSDs you will be adding to your cluster. You can add 'osd_pool_default_pg_num' and 'osd_pool_default_pgp_num' to your ceph.conf before creating your monitors to have the default pools created with the proper number of placement groups.
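
For example, something along these lines in the [global] section of ceph.conf (the 4096 values are only placeholders; size them for your own OSD count and replica level):

    osd_pool_default_pg_num = 4096
    osd_pool_default_pgp_num = 4096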

I believe with the old mkcephfs script the number of OSDs was used to pick a better default pg count. I don't think this is really necessary anymore, as you can increase the number of placement groups in a pool now.

ceph osd pool set <poolname> pg_num <num>
ceph osd pool set <poolname> pgp_num <num>
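
For example, to bring the default 'data' pool up to a higher count (4096 here is only an illustrative value, rounded up from the ~3400 the formula below suggests for 68 OSDs at 2 replicas; raise pg_num first, then pgp_num, since pgp_num cannot exceed pg_num):

    ceph osd pool set data pg_num 4096
    ceph osd pool set data pgp_num 4096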

On Wed, Oct 23, 2013 at 6:13 AM, Snider, Tim <Tim.Snider@xxxxxxxxxx> wrote:
> I have a newly created cluster with 68 OSDs and the default of 2 replicas. The default pools are created with 64 placement groups. The documentation in http://ceph.com/docs/master/rados/operations/pools/ states, for OSD pool creation:
> "We recommend approximately 50-100 placement groups per OSD to balance out memory and CPU requirements and per-OSD load. For a single pool of objects, you can use the following formula: Total PGS = (osds *100)/Replicas"
>
> For this cluster pools should have 3400 pgs [(68*100)/2] according to the recommendation.
> Why isn't the guideline followed for default pools?
> Maybe they're created prior to having all the osds activated?
> Maybe I'm reading the documentation incorrectly.
>
> /home/ceph/bin# ceph osd getmaxosd
> max_osd = 68 in epoch 219
> /home/ceph/bin# ceph osd lspools
> 0 data,1 metadata,2 rbd,
> /home/ceph/bin# ceph osd pool get data pg_num
> pg_num: 64
> /home/ceph/bin# ceph osd pool get data size
> size: 2
>
> Thanks,
> Tim
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



