What release are you running where ceph-deploy still works?

I get what you're saying, but really you should get used to OSD IDs being arbitrary.

- ``ceph osd ls-tree <name>`` will output a list of OSD ids under the given CRUSH name (like a host or rack name). This is useful for applying changes to entire subtrees, for example ``ceph osd down `ceph osd ls-tree rack1```. It is also handy for one-off scripts, e.g. to get the list of OSDs on a given host.

Normally the OSD ID selected for a new OSD is the lowest-numbered unused one, which can be either an ID that has never been used or one whose OSD has since been deleted. So if you delete an OSD entirely and redeploy, you may or may not get the same ID back, depending on the cluster's history.

- ``ceph osd destroy`` will mark an OSD destroyed and remove its cephx and lockbox keys. However, the OSD id and CRUSH map entry will remain in place, allowing the id to be reused by a replacement device with minimal data rebalancing.

Destroying OSDs and redeploying them can help with what you're after; a rough sketch of that workflow is at the bottom of this message, below the quoted mail.

> On Oct 17, 2024, at 9:14 PM, Shain Miley <SMiley@xxxxxxx> wrote:
>
> Hello,
> I am still using ceph-deploy to add OSDs to my cluster. From what I have read, ceph-deploy does not allow you to specify the osd.id when creating new OSDs; however, I am wondering if there is a way to influence the number that Ceph will assign to the next OSD that is created.
>
> I know that it really shouldn't matter what OSD number gets assigned to a disk, but as the number of OSDs increases it is much easier to keep track of where things are if you can control the ID when replacing failed disks or adding new nodes.
>
> Thank you,
> Shain
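
If you want to keep a failed OSD's ID when you swap in a replacement disk, the usual pattern is roughly the one below. This is only a sketch: osd id 12 and /dev/sdX are placeholders, the ceph-volume step has to run on the OSD's host, and the exact deployment step depends on your tooling.

  # Mark the failed OSD destroyed: its cephx/lockbox keys are removed,
  # but the osd id and CRUSH map entry are kept.
  ceph osd destroy 12 --yes-i-really-mean-it

  # Redeploy onto the replacement device, explicitly reusing the same id.
  ceph-volume lvm create --osd-id 12 --data /dev/sdX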