On Wed, 15 Feb 2017, Wido den Hollander wrote:
> Hi,
>
> Currently we can supply an OSD UUID to 'ceph-disk prepare', but we
> can't provide an OSD ID.
>
> With BlueStore coming, I think the use case for this is becoming very
> relevant:
>
> 1. Stop the OSD
> 2. Zap the disk
> 3. Re-create the OSD with the same ID and UUID (with BlueStore)
> 4. Start the OSD
>
> This allows for an in-place update of the OSD without modifying the
> CRUSH map. From the cluster's point of view, the OSD goes down and
> comes back up empty.
>
> There were some drawbacks and dangers around this, so before I start
> working on a PR for it: are there any gotchas which might be a
> problem?
>
> The idea is that users have a very simple way to re-format an OSD
> in-place while keeping the same CRUSH location, ID, and UUID.

+1

However, I don't think we need to specify the osd id, just the uuid. If
you pass an existing uuid to 'osd create' it will give you back the
existing osd id. Please test to confirm, but I *think* it is sufficient
to just give ceph-disk prepare the old osd's uuid.

Maybe the thing to do is create a streamlined command to do this:
'ceph-disk prepare --zap-and-reformat' or something that grabs the old
uuid for you, does the zap, and then feeds it to prepare?

sage
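For reference, the flow discussed above could be sketched roughly as
follows. This is a hedged sketch, not a tested procedure: the OSD id and
device are placeholders, and both the 'ceph-disk prepare --osd-uuid'
option and the field position of the uuid in 'ceph osd dump' output
should be verified against the Ceph release in use.

```shell
#!/bin/sh
# Sketch of an in-place BlueStore reformat keeping the same OSD id/uuid.
# Assumptions: OSD_ID and DEV are examples; uuid is the last field of
# the matching 'ceph osd dump' line (verify on your release).
OSD_ID=12
DEV=/dev/sdb

# 1. Stop the OSD
systemctl stop "ceph-osd@${OSD_ID}"

# 2. Record the existing uuid before zapping the disk
OSD_UUID=$(ceph osd dump | awk -v id="osd.${OSD_ID}" '$1 == id {print $NF}')

# 3. Zap the disk
ceph-disk zap "${DEV}"

# 4. Re-prepare with the old uuid; 'ceph osd create' hands back the
#    existing id for a known uuid, so the CRUSH location is preserved
ceph-disk prepare --bluestore --osd-uuid "${OSD_UUID}" "${DEV}"

# 5. The OSD is normally activated and started via udev/ceph-disk activate
```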