How many pools do you have? How many PGs does your cluster have in total, not just in your rbd pool?
  ceph osd lspools
  ceph -s | grep -Eo '[0-9]+ pgs'

My guess is that you have other pools with PGs, and the cumulative total of PGs per OSD is too high.
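If it helps, here is a rough way to estimate the number the warning is based on. This is only a sketch: it assumes replicated pools (an erasure-coded pool would count k+m copies instead of "size"), and it uses standard ceph/rados commands:

  # Rough estimate of PGs per OSD across ALL pools:
  #   sum over pools of (pg_num * size) / number of OSDs
  osds=$(ceph osd ls | wc -l)
  total=0
  for pool in $(rados lspools); do
      pg=$(ceph osd pool get "$pool" pg_num | awk '{print $2}')
      size=$(ceph osd pool get "$pool" size | awk '{print $2}')
      total=$((total + pg * size))
  done
  echo "approx PGs per OSD: $((total / osds))"

If that comes out near 377, the math lines up with your warning; the 300 ceiling is the mon_pg_warn_max_per_osd default.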
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Andrus, Brian Contractor [bdandrus@xxxxxxx]
Sent: Thursday, September 22, 2016 9:33 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] too many PGs per OSD when pg_num = 256??

All,
I am getting a warning:
  health HEALTH_WARN
      too many PGs per OSD (377 > max 300)
      pool cephfs_data has many more objects per pg than average (too few pgs?)

yet, when I check the settings:

  # ceph osd pool get rbd pg_num
  pg_num: 256
  # ceph osd pool get rbd pgp_num
  pgp_num: 256
How does something like this happen? I did create a radosgw several weeks ago and put a single file in it for testing, but that is it. It only started giving the warning a couple of days ago.
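A quick way to check every pool's pg_num at once, not just rbd's (a sketch; rados lspools lists all pool names one per line):

  for pool in $(rados lspools); do
      printf '%s: ' "$pool"
      ceph osd pool get "$pool" pg_num
  done

Note that radosgw typically creates several pools of its own on first use (.rgw.root, .rgw.control, and so on), each of which adds PGs to the per-OSD total even if only a single object was ever stored.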
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238