Weird, as our Giant health status was OK before upgrading to Hammer… We had read this originally when creating the cluster.

Well, yes, maybe. I believe we bumped PGs because the status complaints in Giant mentioned explicit pool names, e.g. "too few PGs in <pool-name>"… so we naturally bumped the mentioned pools up to the next power of two until health stopped complaining. And yes, we wondered about the relatively high total number of PGs for the cluster, as we had initially read pgcalc and thought we understood it (a rough sketch of my reading is in the P.S. below). ceph.com is not responding at present…

- Are you saying one needs to consider the number of pools in a cluster in advance and factor this in when calculating the number of PGs?
- If so, how does one decide which pool gets what pg_num, since this is set per pool, especially if one can't precalculate the amount of objects ending up in each pool?

I do understand that more pools means more PGs per OSD, but does that imply that using different pools to segregate data, e.g. per application in the same cluster, is a bad idea? Using pools as a sort of namespace segregation makes it easy, for example, to remove/migrate data per application, and is thus a handy segregation tool IMHO.

- Is the best current practice to consolidate data into a few pools per cluster?

/Steffen
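P.S. For reference, here is a rough sketch of how I read the pgcalc rule of thumb, assuming the usual target of ~100 PGs per OSD, a per-pool weighting by expected share of data, and rounding up to a power of two. This is just my interpretation, not the official tool, so the exact numbers and rounding may differ from what ceph.com's calculator would give:

    # Sketch of the pgcalc rule of thumb (my reading, not the official tool):
    # total PGs ~= (num_osds * target_pgs_per_osd) / replica_size,
    # split across pools by expected share of data, rounded up to a power of two.

    def pg_count_per_pool(num_osds, replica_size, data_share,
                          target_pgs_per_osd=100):
        """Suggested pg_num for one pool, given its expected share of cluster data."""
        raw = (num_osds * target_pgs_per_osd * data_share) / replica_size
        # round up to the next power of two
        pg = 1
        while pg < raw:
            pg *= 2
        return pg

    # Example: 24 OSDs, size=3 pools, one pool expected to hold 80% of the data,
    # smaller pools sharing the rest.
    print(pg_count_per_pool(24, 3, 0.80))  # -> 1024
    print(pg_count_per_pool(24, 3, 0.05))  # -> 64

If that reading is right, it would explain why adding more pools without planning their data shares up front inflates the per-OSD PG count.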