David,

Thanks for the info. I am getting an understanding of how this works. I used the ceph-deploy tool to create the rgw pools, and it seems the tool isn't the best at creating the pools an rgw gateway needs, since it made all of them with the default pg_num/pgp_num. Perhaps, then, it is wiser to set a very low default so the ceph-deploy tool doesn't assign a large value to pools that will merely hold control or other metadata?

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

From: David Turner [mailto:david.turner@xxxxxxxxxxxxxxxx]
You have 11 pools with 256 PGs, 1 pool with 128, and 1 pool with 64; that's 3,008 PGs in your entire cluster. Multiply each pool's PG count by its replica size and divide by how many OSDs you have in your cluster, and you'll see what your average number of PGs per OSD is. Based on the replica sizes you shared, that's a total of 6,528 copies of PGs to be divided amongst the OSDs in your cluster. Your cluster will be in warning if that number is greater than 300 per OSD, like you're seeing. When designing your cluster and deciding how many pools, how many PGs, and what replica size you will be setting, please consult the pgcalc tool found here: http://ceph.com/pgcalc/. You cannot reduce the number of PGs in a pool, so the easiest way to resolve this issue is most likely going to be destroying pools and recreating them with the proper number of PGs.
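To make that arithmetic concrete, here is a small sketch of the calculation. The pg_num and replica sizes come from the pool dump quoted below; the OSD count is purely an illustrative assumption, since it wasn't stated in the thread:

# Rough sketch of the "PG copies per OSD" math described above.
# pg_num and replica size per pool are taken from the osd dump below;
# num_osds is an assumed example value, not the actual cluster's count.
pools = {
    'rbd': (256, 3),
    'cephfs_data': (256, 3),
    'cephfs_metadata': (64, 2),
    'vmimages': (128, 2),
    '.rgw.root': (256, 2),
    'default.rgw.control': (256, 2),
    'default.rgw.data.root': (256, 2),
    'default.rgw.gc': (256, 2),
    'default.rgw.log': (256, 2),
    'default.rgw.users.uid': (256, 2),
    'default.rgw.meta': (256, 2),
    'default.rgw.buckets.index': (256, 2),
    'default.rgw.buckets.data': (256, 2),
}

total_pgs = sum(pg for pg, size in pools.values())                # 3008
total_pg_copies = sum(pg * size for pg, size in pools.values())   # 6528

num_osds = 20  # assumption for illustration only
pgs_per_osd = total_pg_copies / num_osds
print(total_pgs, total_pg_copies, round(pgs_per_osd, 1))
# With 20 OSDs this comes out to about 326 PG copies per OSD,
# which is above the 300-per-OSD warning threshold.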
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx]
on behalf of Andrus, Brian Contractor [bdandrus@xxxxxxx]

Ok, this is an odd one to me…

I have several pools, ALL of them are set with pg_num and pgp_num = 256. Yet, the warning about too many PGs per OSD is showing up.

Here are my pools:

pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 134 flags hashpspool stripe_width 0
pool 1 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 203 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 196 flags hashpspool stripe_width 0
pool 3 'vmimages' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 213 flags hashpspool stripe_width 0 removed_snaps [1~3]
pool 25 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6199 flags hashpspool stripe_width 0
pool 26 'default.rgw.control' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6202 flags hashpspool stripe_width 0
pool 27 'default.rgw.data.root' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6204 flags hashpspool stripe_width 0
pool 28 'default.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6205 flags hashpspool stripe_width 0
pool 29 'default.rgw.log' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6206 flags hashpspool stripe_width 0
pool 30 'default.rgw.users.uid' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6211 flags hashpspool stripe_width 0
pool 31 'default.rgw.meta' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6214 flags hashpspool stripe_width 0
pool 32 'default.rgw.buckets.index' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6216 flags hashpspool stripe_width 0
pool 33 'default.rgw.buckets.data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6218 flags hashpspool stripe_width 0

So why would the warning show up, and how do I get it to go away and stay away?

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
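As an aside on the later point about keeping tool-created pools small: the default.rgw.* pools above inherited the cluster-wide pool defaults, so one option is to lower those defaults in ceph.conf before letting ceph-deploy or radosgw create its pools. A minimal sketch, with purely illustrative values (the buckets.data and buckets.index pools would still need their pg_num/pgp_num raised afterwards, per pgcalc):

[global]
# Keep automatically created pools small by default;
# explicitly raise pg_num/pgp_num on pools that will hold real data.
osd pool default pg num = 8
osd pool default pgp num = 8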