Hi,

> so I need to transfer the data from the failed OSD to the other OSDs that are healthy.

It's risky, but if you think the failing disk is healthy "enough", you can try to migrate the data off of it with "ceph osd out {osd-num}" and wait for it to empty. I'm assuming you have enough spare capacity in the rest of your cluster to not hit the "full ratio". You'll probably also want to limit the number of concurrent recovery/backfill operations (e.g. the osd_max_backfills and osd_recovery_max_active options) so you don't hammer your failing disk.

If/when your failing disk fails completely, you'll be left using dd_rescue or manually extracting/importing PGs, as others have said.

Stewart

--
Stewart Morgan MEng MIET
Digital Systems Administrator
Watershed
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
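P.S. A rough sketch of the out-and-drain approach above, for reference. The OSD id (12) is just a placeholder for your failing OSD, and the throttle values are conservative examples, not recommendations; adjust for your cluster:

```shell
# Throttle recovery/backfill cluster-wide so the failing disk
# isn't hammered while it drains (standard OSD options).
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Check spare capacity first, so the drain won't push you past the full ratio.
ceph osd df

# Mark the suspect OSD out; CRUSH reweights it to 0 and
# its PGs backfill onto the remaining healthy OSDs.
ceph osd out 12    # 12 = placeholder id of the failing OSD

# Watch the backfill progress and wait for the OSD to empty.
ceph -s
ceph osd df tree
```

Once the OSD holds no PGs, it can be stopped and removed without data movement.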