Hello,

hope you had a nice Xmas and I wish all of you a good and happy new year in advance...

Yesterday my Ceph Nautilus 14.2.15 cluster had a disk with unreadable sectors. After several retries the OSD was marked down, rebalancing started and has since finished successfully. ceph osd stat now shows the OSD as "autoout,exists".

Usually the steps to replace a failed disk are:

1. Destroy the failed OSD: ceph osd destroy {id}
2. Run ceph-volume lvm create --bluestore --osd-id {id} --data /dev/sdX ... with the new disk in place, to recreate an OSD with the same id without having to touch the crushmap, auth info etc.

(The full sequence is sketched once more in the P.S. below.)

Now I am still waiting for the new disk and I am unsure: should I run the destroy command already now, to keep Ceph from trying to reactivate the broken OSD, and then use ceph-volume to create the new OSD once the disk arrives in a day or so? Or should I leave things as they are until the disk has arrived and then run both steps (destroy, ceph-volume lvm create) one right after the other?

Do the two slightly different approaches make any difference if, for example, a power failure caused a reboot of the node with the failed OSD before I could replace the broken disk?

Any comments on this?

Thanks
Rainer

--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312
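
P.S.: For completeness, a minimal sketch of the sequence I have in mind, assuming the replacement disk shows up as /dev/sdX. The device path, the --yes-i-really-mean-it confirmation and the zap step are only illustrative extras; the zap is only needed if the new device carries leftover LVM or partition metadata.

  # mark the failed OSD as destroyed; its id stays in the crushmap for reuse
  ceph osd destroy {id} --yes-i-really-mean-it

  # with the new disk installed: wipe it only if it is not factory-fresh
  ceph-volume lvm zap /dev/sdX --destroy

  # recreate the OSD under the old id on the new disk
  ceph-volume lvm create --bluestore --osd-id {id} --data /dev/sdX

  # check that the OSD comes back up and backfilling starts
  ceph osd tree
  ceph -s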