Re: Recovering from a Failed Disk (replication 1)

You probably need to attempt a physical data rescue; data access will be lost until it is done.

First, shut down the OSD to avoid any further damage to the disk.
Second, try ddrescue: copy the failing disk to an image, repair the data on that copy if possible, and then clone the copy onto a new disk.
If this doesn't help and you really need that last bit of data, you might need support from one of those companies that recover disk data with electron microscopy.
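The steps above might look roughly like the sketch below. The OSD id, device names, and mount point are placeholders, not values from this thread; substitute your own, and note that on a release as old as Firefly the OSD is usually stopped with sysvinit/upstart scripts rather than systemd:

    # 1. Stop the OSD so nothing else writes to the failing disk
    #    (placeholder id; on Firefly, e.g.: service ceph stop osd.<id>)
    systemctl stop ceph-osd@<id>

    # 2. Copy as much data as possible to an image on a healthy
    #    filesystem; the mapfile lets ddrescue resume and retry bad
    #    sectors on later passes (-d = direct access to the input)
    ddrescue -d /dev/sdX /mnt/rescue/sdX.img /mnt/rescue/sdX.map

    # 3. Run any repair tools against the image copy, then clone the
    #    (repaired) image onto the replacement disk
    #    (-f = force, since the output is a block device)
    ddrescue -f /mnt/rescue/sdX.img /dev/sdY /mnt/rescue/clone.map

Working on the image rather than the original disk means a failed repair attempt costs nothing; you can always re-copy from the image.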

I successfully transferred OSDs between disks using ddrescue.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: vladimir franciz blando <vladimir.blando@xxxxxxxxx>
Sent: 17 October 2019 05:29:13
To: ceph-users@xxxxxxx
Subject:  Recovering from a Failed Disk (replication 1)

Hi,

I have a less-than-ideal setup on one of my clusters: 3 Ceph nodes, but using replication 1 on all pools (don't ask me why replication 1, it's a long story).

It has now come to the point that a disk keeps crashing, possibly a hardware failure, and I need to recover from that.

What's my best option to recover the data from the failed disk and transfer it to the other healthy disks?

This cluster is running Firefly.

- Vlad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
