Hi everyone, in the case where I’ve lost the entire directory below that contains a bluestore OSD’s config and metadata, but all the bluestore devices are intact (block, block.db, block.wal), how can I get the OSD up and running again?

I tried to do a ceph-osd --mkfs again, which seemed to regenerate everything OK and got the OSD back to up/in, but all the placement groups assigned to the OSD are stuck stale. Using the admin socket on the OSD to ask it to trigger a scrub on a particular PG gives a result of “Can't find pg <pg_id>”. It seems the OSD has no knowledge of the PGs that were assigned to it before. I assume this is because the mkfs operation cleared out state from the block/db devices.

Is there any feasible approach to bring an OSD that’s lost its config back to life in the future?

Thanks!

osd0 # ls -l
total 112
lrwxrwxrwx. 1 root root   58 Sep 22 22:26 block -> /dev/disk/by-partuuid/e0b7583c-aa1a-49b9-906b-3580f9f92b9a
lrwxrwxrwx. 1 root root   58 Sep 22 22:26 block.db -> /dev/disk/by-partuuid/e68da1b1-b13c-4ca7-8055-884b0cf32a38
lrwxrwxrwx. 1 root root   58 Sep 22 22:26 block.wal -> /dev/disk/by-partuuid/5d0589b7-e149-4a4f-9dd6-a5444ef25c72
-rw-r--r--. 1 root root    2 Sep 22 22:26 bluefs
-rw-r--r--. 1 root root   37 Sep 22 22:26 ceph_fsid
-rw-r--r--. 1 root root   37 Sep 22 22:26 fsid
-rw-r--r--. 1 root root   56 Sep 22 22:26 keyring
-rw-r--r--. 1 root root    8 Sep 22 22:26 kv_backend
-rw-r--r--. 1 root root   21 Sep 22 22:26 magic
-rw-r--r--. 1 root root    4 Sep 22 22:26 mkfs_done
-rw-r--r--. 1 root root    6 Sep 22 22:26 ready
srwxr-xr-x. 1 root root    0 Sep 22 22:26 ceph-osd.0.asok
-rw-r--r--. 1 root root 2221 Sep 22 22:26 ceph.config
-rw-r--r--. 1 root root   10 Sep 22 22:26 type
-rw-r--r--. 1 root root    2 Sep 22 22:26 whoami
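
For reference, this is roughly the sequence I used when I re-ran mkfs. The data path (/var/lib/ceph/osd/ceph-0) and the exact flags shown here are the standard defaults rather than my literal shell history, so treat this as a sketch:

  # recreate the (assumed default) data dir for osd.0
  mkdir -p /var/lib/ceph/osd/ceph-0
  # point the bluestore symlinks back at the surviving devices (same
  # by-partuuid links as in the listing above)
  ln -s /dev/disk/by-partuuid/e0b7583c-aa1a-49b9-906b-3580f9f92b9a /var/lib/ceph/osd/ceph-0/block
  ln -s /dev/disk/by-partuuid/e68da1b1-b13c-4ca7-8055-884b0cf32a38 /var/lib/ceph/osd/ceph-0/block.db
  ln -s /dev/disk/by-partuuid/5d0589b7-e149-4a4f-9dd6-a5444ef25c72 /var/lib/ceph/osd/ceph-0/block.wal
  # re-run mkfs; this is the step I now suspect re-initialised the
  # bluestore devices and wiped the PG state
  ceph-osd --cluster ceph -i 0 --mkfs --mkkey --osd-data /var/lib/ceph/osd/ceph-0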