Hi John,

ceph osd set nobackfill/norecover/norebalance ?

It's not something you want to accidentally leave set, but it is
useful nonetheless - I'm using it right at this moment to load an
edited crushmap and examine the PG remapping impact before actually
pulling the trigger and letting things sort themselves out (if I
decide not to, I can always re-inject the previous/current crushmap).

Cheers,

On 8 February 2017 at 23:26, John Spray <jspray@xxxxxxxxxx> wrote:
> So I've just finished upgrading my home cluster OSDs to bluestore by
> killing them one by one and then letting backfill happen to "new"
> OSDs on the same drives. Hooray!
>
> One slightly awkward thing I ran into was that even though I had
> noout set throughout, during the period between removing the old OSD
> and adding the "new" one, some PGs would of course get remapped (and
> start generating backfill IO to third-party OSDs). This does make
> sense when you think about it (noout doesn't make the cluster
> magically remember OSDs that have been removed), but it is still
> undesirable behaviour.
>
> A) Do we currently have a mechanism to tell the cluster "even though
> I removed this OSD, don't go moving PGs around just yet"? Should we
> add one?
>
> B) Was there a way for me to avoid this by e.g. skipping the "osd rm
> X" and "osd crush rm osd.X" that I'm currently doing before adding
> the new OSD that will take the old OSD's ID?
>
> John

--
Cheers,
~Blairo
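
P.S. For anyone who wants to try the same thing, the sequence I'm
describing looks roughly like this (filenames are just examples, and
which flags you actually need depends on what you're testing):

    # Freeze data movement while experimenting
    ceph osd set nobackfill
    ceph osd set norecover
    ceph osd set norebalance

    # Save the current crushmap so it can be re-injected later
    ceph osd getcrushmap -o crushmap.current
    crushtool -d crushmap.current -o crushmap.txt

    # ... edit crushmap.txt ...

    # Compile and inject the edited map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    # Inspect the remapping impact, e.g.
    ceph -s
    ceph pg dump pgs_brief | grep remapped

    # Either let things sort themselves out...
    ceph osd unset nobackfill
    ceph osd unset norecover
    ceph osd unset norebalance

    # ...or back out by re-injecting the saved map
    ceph osd setcrushmap -i crushmap.current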