Re: RAID5 degraded, removed the wrong hard disk from the tray

On 17/06/18 13:56, Piero wrote:
> Hello all,
> this is my first time writing to this mailing list; I hope it's the right
> place. I need some help, and someone in the Italian Linux Google group
> advised me to write here, so excuse me in advance if this is not the right place.
> 
> I have some Linux skills, but I'm certainly not a "RAID ninja".
> 
> I have an old (very old) Thecus 3200PRO NAS, on which I installed
> Ubuntu 10.04 server back in 2011. The NAS has three 2TB hard disks
> configured as RAID-5, assembled as /dev/md0.
> Two or three weeks ago, I heard a strange noise from one of the hard disks.
> 
> Checking the "situation" with mdadm -D /dev/md0 showed that the RAID
> was degraded and that one of the disks had been excluded from it.
> 
> Unfortunately, I removed the wrong HD from the tray (damned hurry!).
> I then re-inserted the good disk and removed the faulty one, but now
> the RAID no longer starts.
> fdisk -l says the following (/dev/sdb is the drive I removed by
> mistake, I think):
> 
So long as you didn't try to force a start, it shouldn't have messed
things up. That said, I am puzzled by some of your results ...

> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x0b4f7f28
> 
> Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1               1      243201  1953512001   83  Linux
> Partition 1 does not start on physical sector boundary.
> 
> and this for the good one:
> 
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util
> fdisk doesn't support GPT. Use GNU Parted.

Okay. Boot using a modern rescue disk. fdisk should support GPT. And if
you have any trouble you REALLY want to be using the absolute latest mdadm.
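For instance — a sketch of my own, not something from your box (the helper name and the minimum version are my assumptions) — a recovery script could refuse to run with an old mdadm:

```shell
# Hypothetical helper: pull the major version out of mdadm's banner line
# (e.g. "mdadm - v4.2 - 2021-12-30") and compare it against a minimum.
mdadm_major_ok() {
    banner=$1   # output of `mdadm --version 2>&1`
    min=$2      # minimum acceptable major version
    major=$(printf '%s\n' "$banner" | sed -n 's/.*- v\([0-9][0-9]*\)\..*/\1/p')
    [ "${major:-0}" -ge "$min" ]
}

# On the rescue system you would call it like this (commented out here,
# since it needs mdadm installed):
# mdadm_major_ok "$(mdadm --version 2>&1)" 4 || echo "mdadm too old, upgrade first"
```

Anything 3.x-era or older is best avoided for recovery work.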
> 
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 256 heads, 63 sectors/track, 242251 cylinders
> Units = cylinders of 16128 * 512 = 8257536 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
> 
> Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1               1      242252  1953514583+  ee  GPT
> 
> 
> I have bought a new 2TB hard disk, but I haven't touched anything since,
> because I don't know what to do and I'm afraid of causing more damage.

Brilliant!!! The number of people who mess things up by trying when they
shouldn't ...

You might want to buy a second spare 2TB disk (or not), but we'll see.
> 
> As I saw on the "asking for help" page of the wiki, these are the
> outputs that should help someone to help me:
> 
> root@thecus:~# mdadm --examine /dev/sdb1
> /dev/sdb1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : 3115aed7:e283c89f:4407bf04:b1c52771
>            Name : thecus:0  (local to host thecus)
>   Creation Time : Sun Nov  6 12:27:15 2011
>      Raid Level : raid5
>    Raid Devices : 3
> 
>  Avail Dev Size : 3907023730 (1863.01 GiB 2000.40 GB)
>      Array Size : 7803486208 (3720.99 GiB 3995.38 GB)
>   Used Dev Size : 3901743104 (1860.50 GiB 1997.69 GB)
>     Data Offset : 272 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 97707fac:738775ea:c9bc41e2:43f46b2c
> 
>     Update Time : Mon May 28 16:39:36 2018
>        Checksum : 4a45df11 - correct
>          Events : 44021

Okay ...
> 
>          Layout : left-symmetric
>      Chunk Size : 4096K
> 
>     Array Slot : 3 (failed, 1, failed, 2, failed, failed, failed,
> failed, failed, failed, failed, failed, failed, failed, failed, failed,

<snip>

>    Array State : _uU 382 failed
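As a quick cross-check of those numbers (my own sanity check, not something mdadm prints): for RAID5, Array Size should be (Raid Devices - 1) times Used Dev Size, all in 512-byte sectors.

```shell
# Values copied from the --examine output above (in sectors).
used_dev_size=3901743104
raid_devices=3
array_size=$(( (raid_devices - 1) * used_dev_size ))
echo "$array_size"    # prints 7803486208, matching the Array Size line
```

So the superblock on sdb1 at least looks internally consistent.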
> 
> root@thecus:~# mdadm --examine /dev/sdc1
> /dev/sdc1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : 3f7c6079:b69c27eb:8e4344e4:a3403434
>            Name : thecus:10  (local to host thecus)
>   Creation Time : Sun Nov  6 12:27:14 2011
>      Raid Level : raid1
>    Raid Devices : 3
> 
>  Avail Dev Size : 4192256 (2047.34 MiB 2146.44 MB)
>      Array Size : 4192232 (2047.33 MiB 2146.42 MB)
>   Used Dev Size : 4192232 (2047.33 MiB 2146.42 MB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 19dbe832:b6c96752:770ba051:d7569302
> 
>     Update Time : Sun Jun 19 19:17:54 2016
>        Checksum : 3a2e0b5a - correct
>          Events : 110
> 
This worries me ... this is WILDLY different from the other disk: a
different Array UUID, raid1 rather than raid5, and only 2GB ...
> 
>     Array Slot : 1 (0, 1, 2, failed, failed, failed, failed, failed,
> failed, failed, failed, failed, failed, failed, failed, failed, failed,

<snip>

> failed, failed, failed, failed, failed, failed, failed)
>    Array State : uUu 381 failed
> 
> and similar output for /dev/sdc2 and /dev/sdc3, but not for /dev/sdb2
> and /dev/sdb3
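One quick way to see which partitions claim membership of which array is to compare the Array UUID lines. A sketch, using the two outputs you posted as sample input — on the real box you would feed it `mdadm --examine /dev/sd[bc][123]` instead:

```shell
# Sample text mirroring the two superblocks quoted above.
examine_sample='/dev/sdb1:
     Array UUID : 3115aed7:e283c89f:4407bf04:b1c52771
/dev/sdc1:
     Array UUID : 3f7c6079:b69c27eb:8e4344e4:a3403434'

# Count distinct Array UUIDs; more than one means the partitions belong
# to different arrays, which is why sdb1 and sdc1 look so different.
distinct=$(printf '%s\n' "$examine_sample" | awk '/Array UUID/ {print $4}' | sort -u | wc -l)
echo "$distinct"    # prints 2
```

That should make it obvious at a glance which partitions actually belong to md0.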
> 
> root@thecus:~# mdadm --detail /dev/md0
> mdadm: md device /dev/md0 does not appear to be active.
> 
> 
> root@thecus:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md10 : inactive sdc1[1](S)
>       2096128 blocks super 1.2
> 
> md50 : inactive sdc3[1](S)
>       524276 blocks super 1.2
> 
> md0 : inactive sdc2[1](S) sdb1[3](S)
>       3904385465 blocks super 1.2
> 
> unused devices: <none>
> 
> 
> Hoping I have given all the necessary information: do you think I can
> recover the RAID and the data stored on it?
> 
I'm worried ...

Anyways, first things first. I'm guessing /dev/sda failed. Do you have
another computer? Can you examine this disk there? Can you do a ddrescue
to copy the contents onto your new hard disk?
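In case it helps, a typical ddrescue run looks something like this — the device names are placeholders, so triple-check which is source and which is destination before running anything:

```shell
# First pass: copy everything readable quickly, skipping the slow
# scraping of bad areas; the map file records progress and lets you resume.
ddrescue -f -n /dev/OLD_DISK /dev/NEW_DISK rescue.map

# Second pass: go back and retry only the bad areas, up to three times.
ddrescue -f -r3 /dev/OLD_DISK /dev/NEW_DISK rescue.map
```

Note it's GNU ddrescue (the gddrescue package on Debian/Ubuntu), not the unrelated dd_rescue.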

> What should I do?
> 
> Sorry for the length of the message (and obviously for my English too),
> and really, many thanks in advance for your help.
> 
Dunno whether anybody can provide any other help but ... I'm worried
that sdc has been trashed. That means that sdb is your only good disk.
It might well be a good idea to get another 2TB drive and ddrescue that
as well.

If you can ddrescue sda, let us know what --examine says. Hopefully it's
reasonably intact, and we can rebuild the array from the two copied disks.
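If the copies check out, the reassembly itself would be something along these lines — a sketch only, to be run against the COPIES and only once --examine looks sane on all of them; the exact member list depends on what sda's copy reports:

```shell
# Stop the half-assembled remains first.
mdadm --stop /dev/md0

# Then try a normal assemble with the raid5 members.
mdadm --assemble /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1

# Only if that refuses because the event counts differ slightly:
# mdadm --assemble --force /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1
```

Don't reach for --force until a plain assemble has actually failed, and post the error here first if you're unsure.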

Cheers,
Wol

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


