Re: CHOOSING THE NUMBER OF PLACEMENT GROUPS

Hi Janne:
    Thanks for your response. Approximately 100 PGs per OSD, yes, I
missed that part.
I am still a little confused, because the 100-PGs-per-OSD rule applies
to the sum over all pools in use.
I know I can create many pools. Assume that I have 5 pools now and
the rule is already met.
If I then create a sixth pool, the total PG count will increase, and
the PGs per OSD will exceed 100.
Won't this violate the rule?
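
For example (hypothetical numbers, using the usual rule of thumb:
PGs per OSD = sum of pg_num * replica size over all pools, divided
by the number of OSDs):

    # per-OSD PG counts are shown in the PGS column
    ceph osd df

    # pg_num and replica size of every existing pool
    ceph osd pool ls detail

With 20 OSDs and 5 pools of pg_num 128 at size 3, that gives
5 * 128 * 3 / 20 = 96 PGs per OSD, so the rule is met. A sixth such
pool would add 128 * 3 / 20, about 19 more PGs per OSD.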


On Fri, Mar 9, 2018 at 5:40 PM, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>
>
> 2018-03-09 10:27 GMT+01:00 Will Zhao <zhao6305@xxxxxxxxx>:
>>
>> Hi all:
>>
>>      I have a tiny question. I have read the documents, and they
>> recommend approximately 100 placement groups for normal usage.
>
>
> Per OSD. Approximately 100 PGs per OSD, when all used pools are summed up.
> For things like radosgw, let it use the low defaults (8?) and then expand
> pg_num on the pools that actually see a lot of data coming into them;
> leave the rest as is.
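>
> For example, expanding one of the rgw data pools later might look like
> this (a sketch; the pool name is the usual rgw default, and pgp_num has
> to be raised to match pg_num afterwards):
>
>     ceph osd pool set default.rgw.buckets.data pg_num 256
>     ceph osd pool set default.rgw.buckets.data pgp_num 256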
>
>
>>
>> Because the pg num can not be decreased, if the pg num in the
>> current cluster already meets this rule, what pg num should I set
>> when I create a new pool? I think no matter what I do, it will
>> violate the pg-num rule and add burden to the OSDs. Does this mean
>> that if I want my cluster to be used by many different users, I
>> should build a new cluster for each new user?
>>
>
> No, one cluster can serve a lot of clients. You can have lots of pools
> if you need them, and those pools can have separate OSD hosts serving
> them if you need strong separation, while still being managed from the
> same cluster.
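>
> A sketch of how that separation could look (the rule, root, and pool
> names here are made up): give a pool a CRUSH rule that only places data
> on a dedicated part of the CRUSH tree:
>
>     # replicated rule choosing hosts under the root "tenant-a"
>     ceph osd crush rule create-replicated tenant-a-rule tenant-a host
>
>     # make the pool place its data via that rule
>     ceph osd pool set tenant-a-pool crush_rule tenant-a-rule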
>
> --
> May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


