Re: Correct number of pg

Wonderful, we will leave our pg at 4096 :)

many thanks for the advice, Paul :)

have a good day,

Jake

On 8/19/19 11:03 AM, Paul Emmerich wrote:
> On Mon, Aug 19, 2019 at 10:51 AM Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
>>
>> Dear All,
>>
>> We have a new Nautilus cluster, used for cephfs, with pg_autoscaler in
>> warn mode.
>>
>> Shortly after hitting 62% full, the autoscaler started warning that we
>> have too few PGs:
>>
>> *********************************************************
>>     Pool ec82pool has 4096 placement groups, should have 16384
>> *********************************************************
>>
>> The pool is 62% full, we have 450 OSDs, and are using k=8 m=2 erasure
>> coding.
>>
>> Does 16384 pg seem reasonable?
> 
> No, that would be a horrible value for a cluster of that size; 4096 is
> perfect here.
> 
> 
> Paul
> 
>>
>> The on-line pg calculator suggests 4096...
>>
>> https://ceph.io/pgcalc/
>>
>> (Size = 10, OSD=450, %Data=100, Target OSD 100)
>>
>> many thanks,
>>
>> Jake
>>
>> --
>> MRC Laboratory of Molecular Biology
>> Francis Crick Avenue,
>> Cambridge CB2 0QH, UK.
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
-- 
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.

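[Archive note] The pgcalc figure quoted in the thread can be reproduced with the standard rule of thumb the calculator is based on. The sketch below is an assumption from the published formula (function name is invented; the "step up if the power of two undershoots by more than 25%" rounding rule is the behaviour pgcalc describes), not Ceph source code:

```python
import math

def suggested_pg_count(num_osds, target_pgs_per_osd, pool_size, data_fraction=1.0):
    # Raw estimate: spread target PG replicas evenly over the OSDs,
    # scaled by this pool's share of the cluster's data.
    raw = num_osds * target_pgs_per_osd * data_fraction / pool_size

    # Round down to a power of two, but step up to the next power of two
    # if that would undershoot the raw value by more than 25%.
    pgs = 2 ** int(math.floor(math.log2(raw)))
    if pgs < 0.75 * raw:
        pgs *= 2
    return pgs

# Thread parameters: 450 OSDs, target 100 PGs/OSD, k=8 + m=2 erasure
# coding gives an effective pool size of 10, and the pool holds 100%
# of the data (%Data = 100 in pgcalc terms, i.e. data_fraction = 1.0).
print(suggested_pg_count(450, 100, 10))  # -> 4096
```

With these inputs the raw value is 450 * 100 / 10 = 4500, and 4096 is only about 9% below that, so the calculator keeps 4096 rather than jumping to 8192 — consistent with Paul's advice above.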


