On Thu, Nov 19, 2015 at 8:56 PM, Richard Gray <richard.gray@xxxxxxxxxxxx> wrote:
> Hi,
>
> Running 'health detail' on our Ceph cluster this morning, I noticed a
> warning about one of the pools having significantly more objects per
> placement group than the cluster average.
>
> ceph> health detail
> HEALTH_WARN pool cas_backup has too few pgs
> pool cas_backup objects per pg (2849) is more than 26.1376 times cluster
> average (109)
>
> For our cluster, I think this situation is more or less normal. The pool
> in question backs an RBD service (about 1TB of data), and we also have a
> number of tiny pools relating to a radosgw service. We only have three
> OSDs, so we've gone with 128 placement groups as recommended in the
> documentation.
>
> I understand from http://tracker.ceph.com/issues/8103 that I could make
> the warning go away by adjusting the "mon pg warn max object skew"
> parameter upwards, or by setting it to zero to disable the warning
> altogether.
>
> Is there a reason this would be a bad idea, and if so, is there a more
> sensible approach to dealing with the warning? I'd like to be able to
> actively monitor the cluster status, and would prefer to address this
> warning rather than ignore it, if possible.

You probably just want to raise that warning threshold (example below),
assuming you've got a decent data balance. The check is trying to keep you
from getting into a situation where you have five pools with the same PG
count but wildly varying data sizes, as that makes it a lot more likely
that some of your OSDs will end up with dramatically more data than the
others. In this case you've apparently just created all of your pools with
the same PG count, which should be fine at your scale but means the RGW
pools have far more PGs than there's really any point in having. :)
-Greg
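
P.S. If you'd rather raise the threshold than disable it outright: your
reported skew is about 26.1x, so any value above that (say 30) should
clear the warning. Something like the following ought to work; I'm writing
it from memory, so double-check the option name on your release. To change
it at runtime on the monitors:

    ceph tell mon.* injectargs '--mon-pg-warn-max-object-skew 30'

And to make it persist across monitor restarts, add it to ceph.conf on the
mon hosts:

    [mon]
    # Ratio of a pool's objects-per-pg to the cluster average above
    # which the health warning fires; 0 disables the check entirely.
    mon pg warn max object skew = 30

You can confirm the running value with
'ceph daemon mon.<id> config show | grep object_skew' on a monitor node.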