Re: pg count question

Folks,


I used your link to calculate PGs and did the following.

Total OSDs: 14
Replicas: 3
Total pools: 2 (images & vms). In %Data I gave 5% to images & 95% to
vms (OpenStack).

https://ceph.com/pgcalc/

It gave me the following result:

vms  -  512 PG
images - 16 PG
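
If I understand the calculator correctly, it roughly works like this
(assuming its default target of ~100 PGs per OSD):

14 OSDs * 100 / 3 replicas = ~466 PGs total across both pools
vms:    466 * 0.95 = ~443 -> next power of 2 = 512
images: 466 * 0.05 = ~23  -> rounded to a nearby power of 2 = 16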

To be on the safe side I set vms to 256 PGs. Is that a good idea? You can
increase pg_num but you can't reduce it, so I want to start smaller and
leave room to increase it later; I just don't want to commit to a bigger
number that could cause other performance issues. Do you think my approach
is right, or should I set 512 now?

On Fri, Aug 10, 2018 at 9:23 AM, Satish Patel <satish.txt@xxxxxxxxx> wrote:
> Re-sending it, because I found I lost my membership, so I wanted to make
> sure my email went through.
>
> On Fri, Aug 10, 2018 at 7:07 AM, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>> Thanks,
>>
>> Can you explain the %Data field in that calculation? Is this the total data
>> usage for a specific pool, or overall?
>>
>> For example
>>
>> Pool-1 is small, so should I use 20%?
>> Pool-2 is bigger, so should I use 80%?
>>
>> I'm confused there, so can you give me an example of how to calculate that
>> field?
>>
>> Sent from my iPhone
>>
>> On Aug 9, 2018, at 4:25 PM, Subhachandra Chandra <schandra@xxxxxxxxxxxx>
>> wrote:
>>
>> I have used the calculator at https://ceph.com/pgcalc/, which looks at the
>> relative sizes of the pools and makes a suggestion.
>>
>> Subhachandra
>>
>> On Thu, Aug 9, 2018 at 1:11 PM, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>>>
>>> Thanks Subhachandra,
>>>
>>> That is a good point, but how do I calculate the PG count based on size?
>>>
>>> On Thu, Aug 9, 2018 at 1:42 PM, Subhachandra Chandra
>>> <schandra@xxxxxxxxxxxx> wrote:
>>> > If pool1 is going to be much smaller than pool2, you may want more PGs
>>> > in pool2 for better distribution of data.
>>> >
>>> >
>>> >
>>> >
>>> > On Wed, Aug 8, 2018 at 12:40 AM, Sébastien VIGNERON
>>> > <sebastien.vigneron@xxxxxxxxx> wrote:
>>> >>
>>> >> The formula seems correct for a 100 pg/OSD target.
>>> >>
>>> >>
>>> >> > On 8 Aug 2018, at 04:21, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>>> >> >
>>> >> > Thanks!
>>> >> >
>>> >> > Do you have any comments on Question 1?
>>> >> >
>>> >> > On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON
>>> >> > <sebastien.vigneron@xxxxxxxxx> wrote:
>>> >> >> Question 2:
>>> >> >>
>>> >> >> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
>>> >> >> set object or byte limit on pool
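>>> >> >>
>>> >> >> For example, to cap a pool named pool-1 at roughly 500GB (value given in
>>> >> >> bytes; just an illustration):
>>> >> >>
>>> >> >> ceph osd pool set-quota pool-1 max_bytes 500000000000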
>>> >> >>
>>> >> >>
>>> >> >>> On 7 Aug 2018, at 16:50, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>>> >> >>>
>>> >> >>> Folks,
>>> >> >>>
>>> >> >>> I am a little confused and just need clarification. I have 14 OSDs in my
>>> >> >>> cluster and I want to create two pools (pool-1 & pool-2). How do I divide
>>> >> >>> PGs between the two pools with replication 3?
>>> >> >>> Question: 1
>>> >> >>>
>>> >> >>> Is this the correct formula?
>>> >> >>>
>>> >> >>> 14 * 100 / 3 / 2 = 233 (next power of 2 would be 256)
>>> >> >>>
>>> >> >>> So I should give 256 PGs per pool, right?
>>> >> >>>
>>> >> >>> pool-1 = 256 pg & pgp
>>> >> >>> pool-2 = 256 pg & pgp
>>> >> >>>
>>> >> >>>
>>> >> >>> Question: 2
>>> >> >>>
>>> >> >>> How do I set a limit on a pool? For example, what if I want pool-1 to only
>>> >> >>> use 500GB and pool-2 to use the rest of the space?
>>> >> >>
>>> >>
>>> >
>>> >
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



