I replaced another disk, and this time everything worked as expected.
I followed this procedure:
1) Drain and destroy the OSD:
ceph orch osd rm <ID> --replace
2) Replace the disk.
3) Zap the new disk:
ceph orch device zap <host> /dev/sd<X> --force
4) Manually create the new OSD:
ceph orch daemon add osd <host>:/dev/sd<X>
5) Adjust the CRUSH weight for the new disk size:
ceph osd crush reweight osd.<ID> <weight>
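A note on step 5: the weight is conventionally the disk capacity in
TiB, and since the recreated OSD reuses the existing CRUSH entry it
may keep the old disk's weight. As a purely hypothetical example, for
OSD id 7 on a new 4 TB disk, 4 TB is roughly 3.64 TiB, so the command
would be:
ceph osd crush reweight osd.7 3.64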
Step 1 leaves an OSD in the "destroyed" state in the cluster; at
step 4 it is automatically replaced by a new OSD with the same ID,
associated with the new disk.
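In case it is useful to others, the intermediate states can be
checked with the usual commands (only the placeholders already used
above are assumed). The drain/removal progress after step 1:
ceph orch osd rm status
The destroyed OSD, which should stay visible until step 4:
ceph osd tree
The zapped disk, which should show as available again before step 4
once the device inventory refreshes:
ceph orch device ls <host>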
Thanks to everybody for the help and suggestions.
Nicola