Ceph assigns the next free OSD ID, so IDs will definitely be reused
if you free them up at some point. I'm not sure why
'--all-available-devices' would suddenly choose a different ID than
the OSD had when you marked it as "destroyed". But I also don't use
that 'all-available-devices' flag in production clusters, only for
testing. I like to have control over my OSD specs: if I replace an HDD
whose RocksDB lives on a shared DB device, I don't want Ceph to deploy
a standalone HDD OSD in its place.
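As an illustration, a drivegroup spec along these lines pins rotational
data devices to a shared DB device. The service_id, host_pattern and the
device filters below are made up for this example; adjust them to your
hardware:

```yaml
service_type: osd
service_id: hdd-with-shared-db   # example name, choose your own
placement:
  host_pattern: 'osd-host-*'     # example pattern for the target hosts
spec:
  data_devices:
    rotational: 1                # HDDs as data devices
  db_devices:
    rotational: 0                # SSDs/NVMe as shared RocksDB devices
```

With a spec like this, cephadm only deploys OSDs matching these filters,
instead of grabbing any available device.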
BTW, if you already applied 'ceph orch apply osd
--all-available-devices' once, you don't need to apply it again, that
spec is stored.
Do you have other OSD specs in place? What does 'ceph orch ls osd
--export' show?
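For reference, if the all-available-devices spec is stored, the export
should contain something roughly like this (sketched from memory,
details may differ on your version):

```yaml
service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
```

If you want cephadm to stop auto-creating OSDs from that spec, you can
set it to unmanaged with 'ceph orch apply osd --all-available-devices
--unmanaged=true'.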
Quoting Nicola Mori <mori@xxxxxxxxxx>:
Thanks for your insight. So if I remove an OSD without --replace, its
ID won't be reused when I e.g. add a new host with new disks? Even
if I completely remove it from the cluster? I'm asking because I
maintain a per-OSD failure log, and I'd like to avoid an OSD ID that
was previously on host A showing up on host B at some point.
thanks again,
Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx