Re: Raid recovery

Hi François,
The 12TB drive just arrived today. What would you suggest: should I
format it as ext4 or another filesystem, and should I keep it as one
big partition or divide it into 4 x 3TB partitions?
Thanks
Madalin
On Thu, Sep 27, 2018 at 4:20 PM François Goudal <francois@xxxxxxxxxx> wrote:
>
> Hi Madalin,
>
> You just can't "back up" /dev/md127 because the volume is inactive. You'll have to back up all 4 partitions instead.
>
> Also, I would recommend that you use ddrescue instead of dd. If one of your drives has bad sectors, you'll have a better chance of completing the dump (you can also stop the dump and restart it later without starting over from scratch, which, given the time it will take, can be a good thing)
>
> So, what you need to do is:
>
> ddrescue -d -r3 /dev/sdb3 /path/to/large/disk/mount/point/sdb3.img /path/to/large/disk/mount/point/sdb3.mapfile
>
> ddrescue -d -r3 /dev/sdc3 /path/to/large/disk/mount/point/sdc3.img /path/to/large/disk/mount/point/sdc3.mapfile
>
> ddrescue -d -r3 /dev/sdd3 /path/to/large/disk/mount/point/sdd3.img /path/to/large/disk/mount/point/sdd3.mapfile
>
> ddrescue -d -r3 /dev/sde3 /path/to/large/disk/mount/point/sde3.img /path/to/large/disk/mount/point/sde3.mapfile
>
>
> This is going to take time. You can interrupt those commands at any time and restart them later; the mapfile keeps track of what has already been dumped, so when you restart, ddrescue skips what's already copied.
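> The four ddrescue commands above can also be driven from a small loop, so that after an interruption you simply re-run the same loop and the mapfiles resume each dump. This is only a sketch: the device list and destination path are taken from the commands above, and the leading "echo" makes it a dry run that prints the commands instead of executing them.

```shell
# Sketch: image each RAID member partition in turn.
# DEST is the large disk's mount point (substitute your own path).
# The leading "echo" makes this a dry run; remove it to actually image.
DEST=/path/to/large/disk/mount/point
for part in sdb3 sdc3 sdd3 sde3; do
    echo ddrescue -d -r3 "/dev/$part" "$DEST/$part.img" "$DEST/$part.mapfile"
done
```

> Because each re-run reuses the same mapfile, interrupting and restarting the loop continues from where the previous run stopped.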
>
> Once you have those copies, I'd recommend that you reboot your computer without the original drives attached (first, to make sure you don't touch them, and second because you'd otherwise have two volumes with the same volume ID, which mdadm won't like)
>
> Once the system is booted again, you need to create loopback devices on those 4 files:
>
> losetup -f /path/to/large/disk/mount/point/sdb3.img
> losetup -f /path/to/large/disk/mount/point/sdc3.img
> losetup -f /path/to/large/disk/mount/point/sdd3.img
> losetup -f /path/to/large/disk/mount/point/sde3.img
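> The same four losetup calls can be written as a loop; adding `--show` makes losetup print which loop device (e.g. /dev/loop0) was attached to each image, which is handy to note down. This is a sketch only: the leading "echo" makes it a dry run, so remove it to actually attach the images (as root).

```shell
# Sketch: attach each image file to the next free loop device.
# "--show" prints the chosen device so you know which loop maps
# to which image.  Dry run: remove "echo" to execute (as root).
for img in sdb3 sdc3 sdd3 sde3; do
    echo losetup -f --show "/path/to/large/disk/mount/point/$img.img"
done
```

> Afterwards, `losetup -a` lists all attached loop devices and their backing files, so you can double-check the mapping before assembling.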
>
> After doing this, you then may need to run the following:
>
> mdadm --assemble --scan
>
> After this, you should then see your volume in /proc/mdstat, still in the bad state, and with /dev/loopX instead of /dev/sdX3 devices in it.
>
> So, then I think you should simply try to run:
>
> mdadm --assemble --force /dev/mdX
> (replace mdX with the actual volume name from /proc/mdstat)
>
> Once done, check /proc/mdstat again to see if your volume is back in the active state (hopefully it will be).
> If so, then you should be able to try mounting the /dev/mdX on some directory and access your data.
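> A cautious first mount is read-only, so nothing on a possibly-damaged filesystem gets modified while you check your data. This is a sketch under assumptions: /mnt/recovery as the mount point and md127 as the volume name are placeholders (substitute your own), and the "echo" prefixes make it a dry run.

```shell
# Sketch: mount the reassembled volume read-only first.
# /mnt/recovery and md127 are assumed names -- use your own values.
# Dry run: remove "echo" to execute (as root).
MNT=/mnt/recovery
echo mkdir -p "$MNT"
echo mount -o ro /dev/md127 "$MNT"   # "-o ro" prevents any writes to the volume
```

> Once you've confirmed the data is there (and ideally copied it off), you can remount read-write if you really need to.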
>
> Please report back what happens at every step above in case it doesn't work.
>
>
>
> On 27/09/2018 at 10:33, Madalin Grosu wrote:
>
> Hi François,
>
> I talked to Martin Krafft (I found his signature in some script in the NAS system files) a week ago, and he suggested making a backup, as you said.
> The 12 TB drive should arrive tomorrow, or in the worst case next week.
> What I want to ask is what the dd command should look like to set the destination drive for the saved image.
> What do you think: would it be better to image all the partitions from my RAID drives, or only the data one, /dev/md127?
> Thanks, and I will keep you updated.
>
>
>
> On Tue, 25 Sep 2018 at 16:50, François Goudal <francois@xxxxxxxxxx> wrote:
>>
>> Hi Madalin,
>>
>> On 20/09/2018 at 22:42, Madalin Grosu wrote:
>> > From what I can tell looking at the disk app, sd[bcde]1 is the file
>> > system, sd[bcde]2 is swap, and sd[bcde]3 is data
>> > [...]
>>
>> Sorry I've taken so long to respond. The reason is that I think a
>> simple forced reassembly of that volume should do it, given what I
>> see... but I'm not confident enough to just tell you to do it. If I've
>> missed something, and you do it, and it makes the situation worse, I
>> would be embarrassed.
>>
>> So, let me ask if, by any chance, you could get 12TB of free space
>> on this Linux machine? (maybe borrow a big disk from someone else, or
>> buy one and sell it back shortly after, once you have completed your
>> recovery)
>>
>> If that was the case, you could then image all 4 disks and work on an
>> attempt to reassemble on the images. If things go wrong, you still have
>> your original disks left intact.
>>
>> This is the strategy I used a few weeks ago to deal with my own RAID5
>> volume, and I'm glad I did, because my first attempts at recovering
>> the disk were not right, and I would probably have lost any chance of
>> recovering my data if I had been working on the actual disks
>> themselves.
>>
>
>



