Re: RAID5 degraded, removed the wrong hard disk from the tray


 



On 17/06/2018 20:30, Wols Lists wrote:

Anyways, first things first. I'm guessing /dev/sda failed. Do you have
another computer? Can you examine this disk there? Can you do a ddrescue
to copy the contents onto your new hard disk?

Hello, I'm here again
I have found a guide to ddrescue here:

https://www.technibble.com/guide-using-ddrescue-recover-data/

After reading it avidly, I connected the first damaged disk (/dev/sda) and the new disk I bought a few weeks ago to another PC, and booted the system with a Linux Mint live distro.
I installed gddrescue and executed the following commands:

mint ~ # lsblk -o name,label,size,fstype,model
NAME   LABEL                      SIZE FSTYPE            MODEL
sdd                               3.8G                   DataTraveler SE9
└─sdd1 LINUX MINT                 3.8G vfat
sdb                               1.8T                   WDC WD20EARS-00M
├─sdb2                            1.8T
├─sdb3 thecus:50                  512M linux_raid_member
└─sdb1 thecus:10                    2G linux_raid_member
fd0                                 4K
sde                               7.5G                   Flash Disk
└─sde1 SILICON 8GB                7.5G ntfs
loop0                             1.7G squashfs
sdc                               1.8T                   ST2000DM006-2DM1
sda                             372.6G                   Hitachi HDS72404
├─sda2                          191.9G ntfs
├─sda5                          176.7G ext4
├─sda3                              1K
├─sda1 Riservato per il sistema   100M ntfs
└─sda6                              4G swap

sdd is the USB stick with the live distro, and sda is the disk installed in the PC, with a Linux and a Windows system.
sde is a USB stick for storing the ddrescue logs.
sdb is the damaged disk (with the RAID partitions).
sdc is the new disk.

So I executed this command:

mint ~ # ddrescue -d -f /dev/sdb /dev/sdc /media/usb/ddrescue.logfile
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued:         0 B,  errsize:   2000 GB,  current rate:        0 B/s
   ipos:     2000 GB,   errors:       1,    average rate:        0 B/s
   opos:     2000 GB, run time:    4.27 h,  successful read: 4.27 h ago
Finished

The ddrescue log file is the following:

# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue -d -f /dev/sdb /dev/sdc /media/usb/ddrescue.logfile
# Start time:   2018-06-19 07:35:10
# Current time: 2018-06-19 11:51:41
# Finished
# current_pos  current_status
0x1D1C1115C00     +
#      pos        size  status
0x00000000  0x1D1C1116000  -
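
If I understand the mapfile format correctly, the single data line means one block starting at offset 0x00000000, 0x1D1C1116000 bytes long, with status "-" (bad sectors), i.e. the whole drive failed to read. Converting the hex size seems to confirm it covers all 2 TB (the printf below is just my quick check in the live shell, not part of the rescue):

mint ~ # printf '%d\n' 0x1D1C1116000
2000398934016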

I think the result is not good at all. Furthermore, executing the lsblk command again I get this:

mint ~ # lsblk -o name,label,size,fstype,model

NAME   LABEL                      SIZE FSTYPE   MODEL
sdd                               3.8G          DataTraveler SE9
└─sdd1 LINUX MINT                 3.8G vfat
fd0                                 4K
sde                               7.5G          Flash Disk
└─sde1 SILICON 8GB                7.5G ntfs
loop0                             1.7G squashfs
sdc                               1.8T          ST2000DM006-2DM1
sda                             372.6G          Hitachi HDS72404
├─sda2                          191.9G ntfs
├─sda5                          176.7G ext4
├─sda3                              1K
├─sda1 Riservato per il sistema   100M ntfs
└─sda6                              4G swap

the "raid" disk is disappeared!
so I have restarted the system (the raid disk was present again after a reboot), to try again a ddrescue with -r3 option

mint ~ # ddrescue -d -f -r3 /dev/sdb /dev/sdc /media/usb/ddrescue_new.logfile
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued:         0 B,  errsize:   2000 GB,  current rate:        0 B/s
   ipos:     2000 GB,   errors:       1,    average rate:        0 B/s
   opos:     2000 GB, run time:   17.65 h,  successful read: 17.65 h ago
Finished

This is the new ddrescue log file:

# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue -d -f -r3 /dev/sdb /dev/sdc /media/usb/ddrescue_new.logfile
# Start time:   2018-06-19 14:20:03
# Current time: 2018-06-20 07:59:26
# Finished
# current_pos  current_status
0x1D1C1115E00     +
#      pos        size  status
0x00000000  0x1D1C1116000  -

Again, nothing good. The new disk is empty, and the "errsize: 2000 GB" makes me think that the old disk is really unreadable from beginning to end.
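Before writing off sdb completely, would it be worth looking at its SMART data too, since Wols asked whether I could examine the disk on the other PC? I suppose something like this from the live system (smartctl is in the smartmontools package, which I may still need to install, so I haven't run this yet):

mint ~ # smartctl -x /dev/sdb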

The second damaged disk is still in the NAS, and obviously I'm not touching anything. Now I'm thinking of halting the NAS and trying a ddrescue of the second damaged disk onto the new one. Does this make any sense, or is it absolutely useless? If it does make sense, I suppose the command would be the same as before, with a fresh mapfile; see the sketch below.
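Here is what I would run (the device names are only placeholders, I would check them with lsblk first):

mint ~ # ddrescue -d -f /dev/sdX /dev/sdY /media/usb/ddrescue_disk2.logfile

where sdX would be the second damaged disk and sdY the disk I'm copying onto.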

In the meantime, I'm waiting for a second new disk.

Again, thanks in advance for any advice and help; it is really appreciated.

Piero


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



