Unfortunately this may cause the peering process to take twice as long as usual, and a production cluster will suffer badly from it. It would be wonderful if a solution to this and related problems (the non-idempotent-like behavior of the mon quorum, which I reported about a week ago) could land in dumpling or one of its point releases; it would be extremely useful for production systems where tight timing requirements meet Ceph's behavior.

On Wed, Jul 24, 2013 at 2:15 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> Yeah, this is because right now when you mark an OSD out the weights
> of the buckets above it aren't changing. I guess conceivably we could
> set it up to do so, hrm...
> In any case, if this is inconvenient you can do something like unlink
> the OSD right after you mark it out; that should update the CRUSH map
> to its final state so you don't see any more movement.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Jul 23, 2013 at 3:07 PM, Andrey Korolyov <andrey@xxxxxxx> wrote:
>> Hello,
>>
>> I had a couple of OSDs in the down+out state and a completely clean
>> cluster, but after running ``osd crush remove'' there was some data
>> redistribution, with a shift proportional to the OSD's weight in the
>> crushmap but roughly two times lower than the amount of data moved by
>> 'osd out' for an OSD of the same weight. This is a sort of
>> non-idempotency that has been present at least since the bobtail
>> series.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
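
For readers following the thread: the workaround Greg describes could be sketched with the stock Ceph CLI roughly as below. This is a hypothetical sequence, not from the original messages; the OSD id (12 here) is a placeholder, and the exact `crush unlink` behavior should be checked against your Ceph release before using it on a production cluster.

```shell
# Mark the OSD out so CRUSH stops mapping new data to it.
ceph osd out 12

# Immediately unlink it from the CRUSH hierarchy so the map jumps to its
# final state in one step, instead of moving data twice (once on 'out',
# once again on the later 'crush remove').
ceph osd crush unlink osd.12

# Later, once it is safe, remove the OSD entry entirely.
ceph osd crush remove osd.12
```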