Re: CHOOSING THE NUMBER OF PLACEMENT GROUPS

The PG count per pool also has a lot to do with how much data each pool will hold.  If one pool will hold 90% of the data, it should have 90% of the PGs.  If you expect to create and delete pools often (not usually common, and probably something you can solve more simply), then aim to start at the minimum recommendation and stay between that and the recommended amount, so roughly 40 < you < 100 PGs per OSD.
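As a quick back-of-the-envelope check (a sketch of mine, not an official tool), you can sum pg_num x replica size over all pools and divide by the OSD count; the pool names, pg_num values, replica sizes, and OSD count below are hypothetical:

# Minimal sketch: check the 40-100 PGs-per-OSD guideline across all pools.
# Pool names, pg_num values, replica sizes, and OSD count are made up.
pools = {
    # name: (pg_num, replica_size)
    'rbd':             (1024, 3),
    'cephfs_data':     (256, 3),
    'cephfs_metadata': (64, 3),
}
num_osds = 48

pg_replicas = sum(pg_num * size for pg_num, size in pools.values())
pgs_per_osd = pg_replicas / num_osds
print('PG replicas per OSD: %.1f' % pgs_per_osd)  # aim for roughly 40-100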

Most of what you do in Ceph goes through CephFS, RBD, or RGW.  You rarely need to set up multiple RBD pools, since you can create thousands of RBD images in a single pool.  Asking for separate pools is more common with CephFS, but even there you can set up client capabilities so that each user only has access to a subdirectory and not the entire FS directory tree, etc.  There are generally ways to configure things so that you don't need a new pool every time someone has a storage need.
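For instance, here is a minimal sketch with the Python rados/rbd bindings that creates many RBD images inside a single shared pool; the pool name 'rbd', the image names, the sizes, and the ceph.conf path are all assumptions for illustration:

import rados
import rbd

# Sketch only: pool name, image names, and sizes are hypothetical.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')      # one shared pool for everyone
    try:
        rbd_inst = rbd.RBD()
        for i in range(1000):              # many images, still one pool
            rbd_inst.create(ioctx, 'vm-disk-%04d' % i, 10 * 1024**3)  # 10 GiB
    finally:
        ioctx.close()
finally:
    cluster.shutdown()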

On Fri, Mar 9, 2018 at 6:31 AM Caspar Smit <casparsmit@xxxxxxxxxxx> wrote:
Hi Will,

Yes, adding new pools will increase the number of PGs per OSD, but you can always decrease the number of PGs per OSD again by adding new hosts/OSDs.

When you design a cluster, you have to estimate how many pools you're going to use and feed that into PGCalc (https://ceph.com/pgcalc/).
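Roughly speaking, PGCalc weights a per-OSD PG target by each pool's expected share of the data and rounds up to a power of two; here is a rough Python approximation of that idea (the 48-OSD cluster and data shares are invented for the example):

def suggested_pg_num(target_per_osd, num_osds, data_share, replica_size):
    """Rough approximation of the PGCalc sizing: weight the per-OSD target
    by the pool's expected data share, then round up to a power of two."""
    raw = target_per_osd * num_osds * data_share / replica_size
    power = 1
    while power < raw:
        power *= 2
    return power

# Hypothetical 48-OSD cluster where one pool holds ~90% of the data:
print(suggested_pg_num(100, 48, 0.90, 3))  # -> 2048
print(suggested_pg_num(100, 48, 0.05, 3))  # -> 128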

If you add pools later on, they were not part of the original design and you will probably need additional capacity (OSDs) too.

Kind regards,
Caspar

2018-03-09 11:05 GMT+01:00 Will Zhao <zhao6305@xxxxxxxxx>:
Hi Janne:
    Thanks for your response. Approximately 100 PGs per OSD, yes, I
missed that part.
I am still a little confused, because the 100-PGs-per-OSD rule is the
result of summing over all pools in use.
I know I can create many pools. Assume that I have 5 pools now and
the rule is already met.
So if I create a sixth pool, the total PG count will increase, and
the PGs per OSD will then exceed 100.
Won't this violate the rule?


On Fri, Mar 9, 2018 at 5:40 PM, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>
>
> 2018-03-09 10:27 GMT+01:00 Will Zhao <zhao6305@xxxxxxxxx>:
>>
>> Hi all:
>>
>>      I have a tiny question. I have read the documentation, and it
>> recommends approximately 100 placement groups for normal usage.
>
>
> Per OSD. Approximately 100 PGs per OSD, when all pools in use are summed up.
> For things like radosgw, let it use the low defaults (8?) and then expand
> the pools that actually see a lot of data getting into them; leave the rest
> as is.
>
>
>>
>> Because the pg num cannot be decreased, if the current cluster
>> already meets this rule and I then try to create a new pool,
>> what pg num should I set? I think no matter what I do, it will
>> violate the pg-num rule and add load to the OSDs. Does this mean
>> that if I want my cluster to be used by many different users, I
>> should build a new cluster for each new user?
>>
>
> No, one cluster can serve a lot of clients. You can have lots of pools if
> you need them, and those pools can have separate OSD hosts serving them if
> you need strong separation, but still managed from the same cluster.
>
> --
> May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
