Re: pg_num docs conflict with Hammer PG count warning

On Thu, Aug 6, 2015 at 1:55 PM, Hector Martin <hector@xxxxxxxxxxxxxx> wrote:
> On 2015-08-06 17:18, Wido den Hollander wrote:
>>
>> The number of PGs is cluster-wide, not per pool. So if you have 48
>> OSDs, the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster-wide.
>>
>> Now, with enough memory you can easily have 100 PGs per OSD, but keep in
>> mind that the PG count is cluster-wide and not per pool.
>
>
> I understand that now, but that is not what the docs say. The docs say 4096
> PGs per pool (i.e. in the "ceph osd pool create" command) for 48 OSDs. Which
> seems to be off by a factor of 2.5x from the actual do-the-math
> recommendation for one 3x pool, and successively larger factors as you add
> pools.
>

4096 was the count with *all* pools in mind. Since you have 4
pools, you should consider reducing the number per pool. Also follow
the rule Wido gave in the earlier mail for the calculation, i.e.
n_OSDs * 100 / replica_count. BTW, http://ceph.com/pgcalc/ might help
you select this number better.
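
The rule of thumb above can be sketched as a small calculation. This is
only an illustration of the arithmetic discussed in this thread, not the
official pgcalc logic; the rounding-up-to-a-power-of-two step reflects
the common convention for choosing pg_num, and the function names are my
own.

```python
# Sketch of the rule-of-thumb PG calculation from this thread:
# target ~100 PGs per OSD cluster-wide, divided by the replication
# factor, then split across pools.

def total_pgs(n_osds, replicas, target_per_osd=100):
    """Cluster-wide PG budget: n_OSDs * target_per_osd / replicas."""
    return n_osds * target_per_osd // replicas

def pgs_per_pool(n_osds, replicas, n_pools):
    """Split the cluster-wide budget evenly across pools and round up
    to the next power of two, as pg_num is conventionally chosen."""
    share = total_pgs(n_osds, replicas) / n_pools
    pg_num = 1
    while pg_num < share:
        pg_num *= 2
    return pg_num

# 48 OSDs, 3x replication: 1600 PGs cluster-wide
print(total_pgs(48, 3))        # 1600
# Split across 4 pools: 400 each, rounded up to 512
print(pgs_per_pool(48, 3, 4))  # 512
```

With a single 3x pool on 48 OSDs this gives 2048, which is why the
flat 4096-per-pool figure in the docs reads as roughly 2.5x too high.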
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com