On Sat, Dec 3, 2011 at 6:21 AM, Guido Winkelmann
<guido-ceph@xxxxxxxxxxxxxxxxx> wrote:
> On Friday 02 December 2011 11:29:41 Samuel Just wrote:
>> Guido,
>>
>> Sorry for the confusion, you hit a bug where the default map for a
>> cluster with one osd contains no pgs. 0.39 (which will be released
>> today) will have a fix.
>
> Really? Then why does the output of ceph -s below mention 6 pgs?

There are a couple of different categories of PGs; the 6 that exist are
"local" PGs, which are tied to a specific OSD. However, those aren't
actually used in a standard Ceph configuration.

> BTW, that's another aspect where the documentation is a bit lacking
> right now. I've found a page telling me how to change the number of
> pgs, but I couldn't find any explanation so far what a pg actually is,
> or why I should want to change their number...

PG = "placement group". When placing data in the cluster, objects are
mapped into PGs, and those PGs are mapped onto OSDs. We use this
indirection so that we can group objects, which reduces the amount of
per-object metadata we need to keep track of and the number of processes
we need to run (it would be prohibitively expensive to track, e.g., the
placement history on a per-object basis).

Increasing the number of PGs can reduce the variance in per-OSD load
across your cluster, but each PG requires a bit more CPU and memory on
the OSDs that store it. We try to ballpark it at 100 PGs/OSD, although
it can vary widely without ill effects depending on your cluster. You
hit a bug in how we calculate the initial PG number from a cluster
description.
-Greg
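
In case a concrete example helps, here is a toy Python sketch of that
object -> PG -> OSD indirection. This is not Ceph's actual placement
code (the real PG-to-OSD step uses CRUSH and the cluster map), and the
PG count, OSD count, replica count, and hash choice below are made-up
example values; the function names are hypothetical.

    import hashlib

    PG_NUM = 64      # PGs in the pool (example value only)
    NUM_OSDS = 4     # OSDs in the toy cluster (example value only)
    REPLICAS = 2     # copies kept of each PG (example value only)

    def object_to_pg(name):
        # Objects hash into a placement group; per-object tracking
        # stops at this point.
        h = int(hashlib.md5(name.encode()).hexdigest(), 16)
        return h % PG_NUM

    def pg_to_osds(pg):
        # Each PG maps to an ordered set of OSDs. Real Ceph computes
        # this with CRUSH; placement history is kept per PG, not per
        # object, which is what keeps the metadata cheap.
        primary = pg % NUM_OSDS
        return [(primary + i) % NUM_OSDS for i in range(REPLICAS)]

    for obj in ("rbd_data.1", "rbd_data.2", "foo"):
        pg = object_to_pg(obj)
        print("%r -> pg %d -> osds %s" % (obj, pg, pg_to_osds(pg)))

Per the ~100 PGs/OSD ballpark above, a 4-OSD cluster like this toy one
would target on the order of 400 PGs in total across its pools; more
PGs smooth out per-OSD load, at the cost of a bit more CPU and memory
per PG on each OSD.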