On Sat, Jan 3, 2015 at 8:53 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> On Sat, 3 Jan 2015 16:21:29 +1000 Lindsay Mathieson wrote:
>
>> I just added 4 OSDs to my 2 OSD "cluster" (2 nodes, now 3 OSDs per
>> node).
>>
>> Given it's the weekend and the cluster is not in use, I've set them all
>> to weight 1, but it looks like it's going to take a while to
>> rebalance ... :)
>>
>> Is having them all at weight 1 the fastest way to get back to health, or
>> is it causing contention?
>>
> Well, your cluster has finished rebuilding already.
> To minimize the impact, adding one OSD at a time (and maybe increasing its
> weight gradually) is the way to go, but that of course will take the
> longest, as data gets shuffled around over and over again.
>
> What you did causes the least amount of data movement in total, so despite
> stressing everything and certainly causing contention in some components,
> it is likely the fastest approach.
>
>> Current health:
>>
>> ceph -s
>>     cluster f67ef302-5c31-425d-b0fe-cdc0738f7a62
>>      health HEALTH_WARN 227 pgs backfill; 2 pgs backfilling; 97 pgs
>> degraded; 29 pgs recovering; 68 pgs recovery_wait; 97 pgs stuck degraded;
>> 326 pgs stuck unclean; recovery 30464/943028 objects degraded (3.230%);
>                                                        ^^^^^^^^
> This is a question for the Ceph developers, but I was under the impression
> that with Giant, adding OSDs would just result in misplaced objects, not
> degraded ones...

Umm, yeah, that should be the case. I can't tell what might have happened
just from the ceph status report, though, and there are lots of things a
user could do to induce that state. ;)
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
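
[Editor's note: for readers who want the gradual approach Christian describes,
a minimal sketch follows. The OSD id (osd.6), the step sizes, and the final
CRUSH weight of 1.0 are hypothetical; pick values that match your disk sizes
and how much concurrent backfill your cluster can tolerate.]

    # Bring the new OSD in at a low CRUSH weight, then raise it in steps,
    # letting recovery settle (HEALTH_OK) before each increase.
    ceph osd crush reweight osd.6 0.2
    ceph -s                           # wait until backfill/recovery finishes
    ceph osd crush reweight osd.6 0.5
    ceph -s
    ceph osd crush reweight osd.6 1.0

As noted above, each reweight step moves some of the same data again, so this
trades total data movement (and total time) for a smaller impact on client I/O
at any given moment.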