On Fri, Jul 27, 2012 at 6:07 AM, Yann Dupont <Yann.Dupont@xxxxxxxxxxxxxx> wrote:
> My ceph cluster is made of 8 OSDs with quite big storage attached.
> All OSD nodes are equal, except 4 OSDs have 6.2 TB and 4 have 8 TB of storage.

Sounds like you should just set the weights yourself, based on the capacities you listed here.

Even then, you only have 8 OSDs. Data placement is essentially stochastic, so you may not get perfect balance with a small cluster. CRUSH evens out quite nicely on larger clusters, but there is still a lot of statistical variation in the picture.
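For reference, a minimal sketch of what setting the weights by hand could look like, assuming the common convention of making each OSD's CRUSH weight its capacity in TB. The osd.0-osd.7 names and the split (osd.0-3 as the 6.2 TB nodes, osd.4-7 as the 8 TB ones) are assumptions; substitute your actual OSD ids:

    # Assumption: osd.0-osd.3 are the 6.2 TB OSDs, osd.4-osd.7 the 8 TB ones.
    for i in 0 1 2 3; do ceph osd crush reweight osd.$i 6.2; done
    for i in 4 5 6 7; do ceph osd crush reweight osd.$i 8.0; done

    # Inspect the resulting weights in the CRUSH tree.
    ceph osd tree

Keeping the weights proportional to capacity means the larger OSDs receive proportionally more placement groups, which is usually what you want when the devices differ only in size.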