Good day,

I have a ceph osd node with an OS drive that has errors and may soon fail. There are 8 x 18TB drives installed in this node, and the journals for each drive are co-located on each drive.

I'd like to replace the failing OS drive, re-install the OS (same node name and IP addressing), push the admin keys and conf to the node again, and re-activate the eight storage drives. Is this possible without affecting the crushmap and data distribution?

In the past I would have set the weight of each drive to 0, waited for the data to backfill elsewhere, then purged the drives and node from the cluster, and then started over: installing the node, adding it to the correct crush bucket, etc. This feels like an unnecessary course of action when all I need to do is replace the OS drive.

OS: Ubuntu 18.04.6 LTS
Ceph version: 15.2.17 - Octopus

Kind regards
Geoffrey Rhodes
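P.S. For clarity, this is roughly what I have in mind once the new OS drive is in (a rough sketch only, assuming the OSDs were deployed with ceph-volume lvm on bluestore; "admin-node" is just a placeholder for wherever the conf and admin keyring live, and I'd welcome corrections on the exact steps):

    # before taking the node down, keep the cluster from rebalancing
    ceph osd set noout

    # after re-installing Ubuntu on the new drive:
    # install the same ceph release, then push the conf and admin key back
    apt install ceph
    scp admin-node:/etc/ceph/ceph.conf /etc/ceph/
    scp admin-node:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

    # scan the LVM tags on the eight data drives and start the existing OSDs
    ceph-volume lvm activate --all

    # once all eight OSDs are back up and in, allow recovery again
    ceph osd unset noout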