Hi David,
Thanks for the explanation!
I'll look into how much data each pool will use.
Thanks!
David Turner <drakonstein@xxxxxxxxx> wrote on Thu, Oct 18, 2018 at 9:26 PM:
Not all pools need the same number of PGs. With that many pools you should start estimating how much data each pool will hold. If one of your pools will hold 80% of your data, it should have 80% of your PGs. The metadata pools for rgw likely won't need more than about 8 PGs each. If your rgw data pool is only going to hold a little scratch data, it won't need very many PGs either.

On Tue, Oct 16, 2018, 3:35 AM Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote:

Hi,

I have a cluster that has been serving rbd and cephfs storage for some time. I added rgw to the cluster yesterday and want it to serve object storage as well. Everything seems fine.

What I'm confused about is how to calculate the pg/pgp numbers. As we all know, the formula for calculating PGs is:

Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count

Before I created rgw, the cluster had 3 pools (rbd, cephfs_data, cephfs_meta). Now it has 8 pools, including the ones the object service may use: '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log' and 'default.rgw.buckets.index'. Should I recalculate the PG number using the new pool count of 8, or should I keep the old PG number?
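For illustration, here is a minimal sketch of the data-weighted sizing David describes: take the cluster-wide PG budget from the usual OSDs * 100 / replicas rule of thumb and split it by each pool's expected share of data, instead of dividing evenly by pool count. The OSD count, replica count, pool names, and percentages below are hypothetical placeholders, not values from this thread.

    # Sketch: distribute a cluster-wide PG budget by expected data share.
    # All numbers below are illustrative assumptions, not from the thread.

    osds = 20                 # hypothetical OSD count
    replication = 3           # assumed replica count
    target_pgs_per_osd = 100  # common rule-of-thumb target

    # Estimated fraction of total data each pool will hold (sums to ~1.0).
    data_share = {
        "cephfs_data": 0.60,
        "rbd": 0.30,
        "default.rgw.buckets.data": 0.08,
        ".rgw.root": 0.005,
        "default.rgw.control": 0.005,
        "default.rgw.meta": 0.005,
        "default.rgw.log": 0.0025,
        "default.rgw.buckets.index": 0.0025,
    }

    def next_power_of_two(n: int) -> int:
        """Round up to the nearest power of two, the usual pg_num convention."""
        p = 1
        while p < n:
            p *= 2
        return p

    # Cluster-wide PG budget: OSDs * target PGs per OSD, divided by replicas.
    total_pgs = osds * target_pgs_per_osd / replication

    for pool, share in data_share.items():
        # Give each pool its share of the budget, with a small floor for
        # the tiny rgw metadata pools (around 8 PGs, as suggested above).
        pgs = max(8, next_power_of_two(round(total_pgs * share)))
        print(f"{pool:30s} ~{pgs} PGs")

The point of weighting by data share rather than by pool count is that the small rgw metadata pools end up near the floor of ~8 PGs, while the pools that actually hold the bulk of the data get most of the per-OSD PG budget.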
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com