On 9 February 2017 at 00:11, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> On Wed, Feb 8, 2017 at 1:50 PM, John Spray <jspray@xxxxxxxxxx> wrote:
>> On Wed, Feb 8, 2017 at 12:41 PM, Blair Bethwaite
>> <blair.bethwaite@xxxxxxxxx> wrote:
>>> Hi John,
>>>
>>> ceph osd set nobackfill/norecover/norebalance ?
>>>
>>> It's not something you want to accidentally leave set, but it is of
>>> use nonetheless - I'm using it right at this moment to load an edited
>>> crushmap and examine the PG remapping impact before actually pulling
>>> the trigger and letting things sort themselves out (if I decide not
>>> to, I can always re-inject the previous/current crushmap).
>>
>> Ah ha, of course nobackfill is the one. I am exposing my lack of
>> experience in actually operating a cluster here :-)
>>
> That said, it might make sense for Ceph to wait a few minutes before
> starting to backfill after any osdmap changes.
> The current behaviour can be a little erratic at times.

Agreed. And the scenario I described above must be very common in
operations, where I suspect more often than not people just make the
change and hope all will be well. It's true that crushtool can simulate
mappings, but what I really want to see is the `ceph -s` output after
the crush change but before the cluster starts actually acting on it:
that gives you a chance to see the amount of data that will move and to
check whether the number of impacted PGs makes sense for the change.

--
Cheers,
~Blairo
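
For reference, the flag-based preview workflow described in the thread can be sketched roughly as below. This is an operational sketch, not a tested recipe: it assumes an edited CRUSH map saved as `crush-new.bin` and a backup of the current map as `crush-old.bin` (both filenames are illustrative), and it needs a live cluster with admin credentials.

```shell
# Pause data movement so a new CRUSH map is applied but not acted on.
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set norecover

# Save the current map first so it can be re-injected if needed.
ceph osd getcrushmap -o crush-old.bin

# Inject the edited CRUSH map (crush-new.bin is an illustrative name).
ceph osd setcrushmap -i crush-new.bin

# At this point `ceph -s` shows the would-be impact: the misplaced /
# degraded PG and object counts indicate how much data would move.
ceph -s

# If the impact looks wrong, re-inject the saved map - nothing has
# moved yet:
#   ceph osd setcrushmap -i crush-old.bin

# Otherwise clear the flags and let backfill/recovery proceed.
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset norecover
```

For an offline check of the remapping itself, crushtool can simulate placements from a compiled map, e.g. `crushtool -i crush-new.bin --test --show-mappings --rule 0 --num-rep 3` (rule and replica count here are examples), though as noted above that does not give the cluster-level summary that `ceph -s` does.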