Re: pg count question

The formula seems correct for a 100 pg/OSD target.
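
For example, with two equally weighted replicated pools, that would translate into something like this (just a sketch, using the pool names from your mail):

    ceph osd pool create pool-1 256 256
    ceph osd pool create pool-2 256 256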
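
For question 2, using the set-quota command below, limiting pool-1 to roughly 500 GB could look like this (assuming 500 GB = 500 * 1024^3 = 536870912000 bytes; pool-2 is left without a quota so it can use the remaining space):

    ceph osd pool set-quota pool-1 max_bytes 536870912000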


> On 8 Aug 2018, at 04:21, Satish Patel <satish.txt@xxxxxxxxx> wrote:
> 
> Thanks!
> 
> Do you have any comments on Question 1?
> 
> On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON
> <sebastien.vigneron@xxxxxxxxx> wrote:
>> Question 2:
>> 
>> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>    (set object or byte limit on pool)
>> 
>> 
>>> On 7 Aug 2018, at 16:50, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>>> 
>>> Folks,
>>> 
>>> I am a little confused and just need clarification. I have 14 OSDs in my
>>> cluster and I want to create two pools (pool-1 & pool-2). How do I
>>> divide PGs between the two pools with replication 3?
>>> 
>>> Question 1:
>>> 
>>> Is this the correct formula?
>>> 
>>> 14 * 100 / 3 / 2 ≈ 233  (nearest power of 2 would be 256)
>>> 
>>> So I should give 256 PGs per pool, right?
>>> 
>>> pool-1 = 256 pg & pgp
>>> pool-2 = 256 pg & pgp
>>> 
>>> 
>>> Question 2:
>>> 
>>> How do I set a limit on a pool? For example, if I want pool-1 to use only
>>> 500 GB and pool-2 to use the rest of the space?
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



