Fwd: I will pay money for the correct RAID recovery instructions


 



Sorry to state the obvious, but...

Restoring the degraded array would (based on the info you've posted)
likely take longer than temporarily moving the data to a different set
of drives.

As time seems to be a major consideration here (likely/possible
failure of sde), surely the optimal strategy is to get the
data off first, then look at rebuilding the degraded array?
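
As a rough sketch of that approach (the mount points below are
assumptions, not from the thread -- adjust them to your setup):

```shell
#!/bin/sh
# Sketch only: SRC_MNT/DST_MNT are hypothetical mount points.
SRC_MNT=/mnt/raid    # where the degraded array is mounted
DST_MNT=/mnt/spare   # temporary set of drives with enough space
RUN="echo"           # guard: drop the 'echo' to actually run the commands

# Remount read-only so nothing more is written to the struggling array.
$RUN mount -o remount,ro "$SRC_MNT"

# rsync preserves permissions/hardlinks/xattrs and can be re-run to
# resume if it gets interrupted partway through.
$RUN rsync -aHAX --info=progress2 "$SRC_MNT/" "$DST_MNT/"
```

With the echo guard in place the script only prints what it would do,
which is a cheap way to review the plan before touching the disks.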

Just my 2c

On 17 October 2014 15:05, John Stoffel <john@xxxxxxxxxxx> wrote:
>
>
> Ian,
>
> It would also help if you posted the details of your setup using:
>
> cat /proc/partitions
> cat /proc/mdstat
>
> mdadm -D /dev/md#
>  - for each of the devices above.
>
> mdadm -E /dev/sd<drive><#>
>  - for each disk or partition in the array from above.
>
>
>
> But the suggestion to ddrescue the failing drive onto a new disk is
> a good one.  On my Debian system, I would do the following:
>
>   sudo apt-get install gddrescue
>   ddrescue /dev/sde /dev/sdf /var/tmp/ddrescue-sde.log
>
> and see how that goes.
>
> Good luck,
> John
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
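
If you go the ddrescue route, here is one hedged sketch of a resumable
two-pass run. The device names come from the message above, but they are
still worth double-checking against /proc/partitions before running
anything:

```shell
#!/bin/sh
# Sketch only: verify /dev/sde (failing) and /dev/sdf (new) first.
SRC=/dev/sde
DST=/dev/sdf
MAP=/var/tmp/ddrescue-sde.log   # the map file lets an interrupted run resume
RUN="echo"                      # guard: drop the 'echo' to actually run

# Pass 1: copy the readable areas quickly, skipping the slow scraping
# phase (-n), so the good data is safe as early as possible.
$RUN ddrescue -n "$SRC" "$DST" "$MAP"

# Pass 2: go back and retry the bad areas a few times (-r3).
$RUN ddrescue -r3 "$SRC" "$DST" "$MAP"
```

The same map file is passed to both runs, so the second pass only
revisits the sectors the first pass could not read.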