Nothing weird; I was working from incomplete data and a bad rounding estimate. Your cluster has too many PGs, and most of your pools will likely need to be recreated with fewer. Have you poked around with the pg calc tool?
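For reference, a minimal sketch of the rule of thumb the pg calc tool encodes (an assumption here: roughly 100 PGs per OSD as the target, divided by the replica count; the actual tool also weights each pool by its expected share of the data):

```shell
# Rough PG budget for the whole cluster, assuming a ~100 PGs/OSD target
# and uniform 3x replication (both are assumptions, not values from the tool).
osds=20        # Brian's cluster size
replicas=3
echo "total PG budget: $(( osds * 100 / replicas ))"
```

For a 20-OSD, 3-replica cluster that works out to roughly 666 PGs across all pools combined, which puts the reported 3,520 well over budget.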
From: Andrus, Brian Contractor [bdandrus@xxxxxxx]
Sent: Thursday, September 22, 2016 11:52 AM
To: David Turner; ceph-users@xxxxxxxxxxxxxx
Subject: RE: too many PGs per OSD when pg_num = 256??

Hmm. Something happened then. I only have 20 OSDs. What may cause that?
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
From: David Turner [mailto:david.turner@xxxxxxxxxxxxxxxx]
So you have 3,520 PGs. Assuming all of your pools are using 3 replicas, and using the 377 PGs/OSD from your health_warn state, that would mean your cluster has 28 OSDs.
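The estimate above works backwards from the warning; a quick sketch of the arithmetic (the 3x replica count is an assumption, which is why the result disagrees with Brian's actual 20 OSDs):

```shell
# pgs_per_osd = total_pgs * replicas / osds, solved for osds.
total_pgs=3520    # from `ceph -s`
replicas=3        # assumed pool size
pgs_per_osd=377   # from the HEALTH_WARN message
echo "implied OSD count: $(( total_pgs * replicas / pgs_per_osd ))"
```

This prints 28; run forwards with the real 20 OSDs instead, 3,520 PGs at 3x would give 528 PGs/OSD, so some pools are evidently replicated at less than 3x.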
From: Andrus, Brian Contractor [bdandrus@xxxxxxx]

David,
I have 15 pools:

# ceph osd lspools | sed 's/,/\n/g'
0 rbd
1 cephfs_data
2 cephfs_metadata
3 vmimages
14 .rgw.root
15 default.rgw.control
16 default.rgw.data.root
17 default.rgw.gc
18 default.rgw.log
19 default.rgw.users.uid
20 default.rgw.users.keys
21 default.rgw.users.email
22 default.rgw.meta
23 default.rgw.buckets.index
24 default.rgw.buckets.data

# ceph -s | grep -Eo '[0-9]+ pgs'
3520 pgs
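To see how those 3,520 PGs are split across the 15 pools, a loop along these lines should work (an untested sketch; it assumes the `ceph` CLI is available and that `ceph osd lspools` prints the "id name,id name,..." format shown above):

```shell
# Print pg_num for every pool, then total them; the total should match
# the 3520 PGs reported by `ceph -s`.
ceph osd lspools | sed 's/,/\n/g' | awk '{print $2}' |
while read -r pool; do
    printf '%-28s %s\n' "$pool" "$(ceph osd pool get "$pool" pg_num)"
done | awk '{sum += $NF} END {print "total:", sum, "PGs"}'
```

The per-pool lines make it obvious which pools were created oversized; the dozen auto-created default.rgw.* pools each contribute their own pg_num to the cluster total.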
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
From: David Turner [mailto:david.turner@xxxxxxxxxxxxxxxx]
Forgot the + for the regex.
From: David Turner

How many pools do you have? How many PGs does your total cluster have, not just your rbd pool?

From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Andrus, Brian Contractor [bdandrus@xxxxxxx]

All,
I am getting a warning:
health HEALTH_WARN
    too many PGs per OSD (377 > max 300)
    pool cephfs_data has many more objects per pg than average (too few pgs?)
Yet when I check the settings:

# ceph osd pool get rbd pg_num
pg_num: 256
# ceph osd pool get rbd pgp_num
pgp_num: 256
How does something like this happen? I did create a radosgw several weeks ago and have put a single file in it for testing, but that is it. It only started giving the warning a couple days ago.
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com