Re: RAID 10 with 2 failed drives

On 19/09/19 21:45, Liviu Petcu wrote:
> Hello,
> 
> Please let me know if in this situation detailed below, there are chances of restoring the RAID 10 array and how I can do it safely. 
> Thank you!

This is Linux raid10, not some form of raid 1+0? That's what it looks
like to me. I notice it says the array is active - I think that's good
news!

Can you mount it read-only and read it? I would be surprised if you
can't, which would mean the array is running fine in degraded mode. NOT
GOOD, but not a problem provided nothing further goes wrong. I notice
it's also a version 0.90 superblock - is it an old array? Have the
drives themselves failed? (Which I guess is probably the case :-( ) I
take it the drives effectively have just the one data partition - 2 -
and partition 1 is something unimportant?
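
For the read-only mount, something like this should do it (I'm assuming
the array is /dev/md1, going by Preferred Minor : 1, and the mount point
is just an example):

# cat /proc/mdstat
# mdadm --detail /dev/md1
# mkdir -p /mnt/check
# mount -o ro /dev/md1 /mnt/check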

Okay, my take on the situation is that you have two failed drives. The
array is okay but degraded, which is a dangerous position to be in. You
need to replace those failed drives asap. BUT. The array is old, which
means a recovery could tip the remaining drives over the edge.

Can you get a SMART report off the drives? If the remaining drives look
healthy we can risk a rebuild; if they don't, we need to shut the array
down and copy them pronto. Either way you need new drives, which makes
this a good time to think about your next move.
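
Something along these lines on each surviving member should tell you
(smartctl comes with the smartmontools package; device names taken from
your --examine output):

# smartctl -H -A /dev/sda
# smartctl -H -A /dev/sdb
# smartctl -H -A /dev/sde
# smartctl -H -A /dev/sdf

Keep a particular eye on Reallocated_Sector_Ct and
Current_Pending_Sector - anything non-zero and climbing on those and you
copy, you don't rebuild.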

You've got raid10 - do you want to get some larger drives and go raid-6?
Do you want to increase your disk capacity? And so on.

Then we can think about either just replacing the failed drives, or
going the whole hog and moving on to a new array.
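
If you just replace the failed drives, the usual dance is roughly this
(again assuming the array is /dev/md1; the new drive appears here as
/dev/sdg purely as an example - partition it to match the survivors
first, then add it back in and let the array rebuild):

# sfdisk -d /dev/sdb | sfdisk /dev/sdg
(copies the partition table across from a good drive - that's the MBR
way; sgdisk can replicate the table if these are GPT disks)
# mdadm /dev/md1 --add /dev/sdg2

Repeat for the second replacement, and watch the recovery progress in
/proc/mdstat.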

Cheers,
Wol
> 
> Liviu Petcu
> 
> # mdadm --examine /dev/sd[abcdef]2
> 
> /dev/sda2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : df4ee56a:547f33ee:32bb33b1:ae294b6e
>   Creation Time : Fri Aug 14 12:11:48 2015
>      Raid Level : raid10
>   Used Dev Size : 1945124864 (1855.02 GiB 1991.81 GB)
>      Array Size : 5835374592 (5565.05 GiB 5975.42 GB)
>    Raid Devices : 6
>   Total Devices : 6
> Preferred Minor : 1
> 
>     Update Time : Thu Sep 19 21:05:15 2019
>           State : active
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 2
>   Spare Devices : 0
>        Checksum : e528c455 - correct
>          Events : 271498
> 
>          Layout : offset=2
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     5       8        2        5      active sync   /dev/sda2
> 
>    0     0       8       18        0      active sync   /dev/sdb2
>    1     1       0        0        1      faulty removed
>    2     2       0        0        2      faulty removed
>    3     3       8       66        3      active sync   /dev/sde2
>    4     4       8       82        4      active sync   /dev/sdf2
>    5     5       8        2        5      active sync   /dev/sda2
> /dev/sdb2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : df4ee56a:547f33ee:32bb33b1:ae294b6e
>   Creation Time : Fri Aug 14 12:11:48 2015
>      Raid Level : raid10
>   Used Dev Size : 1945124864 (1855.02 GiB 1991.81 GB)
>      Array Size : 5835374592 (5565.05 GiB 5975.42 GB)
>    Raid Devices : 6
>   Total Devices : 6
> Preferred Minor : 1
> 
>     Update Time : Thu Sep 19 21:05:15 2019
>           State : active
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 2
>   Spare Devices : 0
>        Checksum : e528c45b - correct
>          Events : 271498
> 
>          Layout : offset=2
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     0       8       18        0      active sync   /dev/sdb2
> 
>    0     0       8       18        0      active sync   /dev/sdb2
>    1     1       0        0        1      faulty removed
>    2     2       0        0        2      faulty removed
>    3     3       8       66        3      active sync   /dev/sde2
>    4     4       8       82        4      active sync   /dev/sdf2
>    5     5       8        2        5      active sync   /dev/sda2
> /dev/sde2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : df4ee56a:547f33ee:32bb33b1:ae294b6e
>   Creation Time : Fri Aug 14 12:11:48 2015
>      Raid Level : raid10
>   Used Dev Size : 1945124864 (1855.02 GiB 1991.81 GB)
>      Array Size : 5835374592 (5565.05 GiB 5975.42 GB)
>    Raid Devices : 6
>   Total Devices : 6
> Preferred Minor : 1
> 
>     Update Time : Thu Sep 19 21:05:16 2019
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 2
>   Spare Devices : 0
>        Checksum : e52ce91f - correct
>          Events : 271499
> 
>          Layout : offset=2
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     3       8       66        3      active sync   /dev/sde2
> 
>    0     0       8       18        0      active sync   /dev/sdb2
>    1     1       0        0        1      faulty removed
>    2     2       0        0        2      faulty removed
>    3     3       8       66        3      active sync   /dev/sde2
>    4     4       8       82        4      active sync   /dev/sdf2
>    5     5       8        2        5      active sync   /dev/sda2
> /dev/sdf2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : df4ee56a:547f33ee:32bb33b1:ae294b6e
>   Creation Time : Fri Aug 14 12:11:48 2015
>      Raid Level : raid10
>   Used Dev Size : 1945124864 (1855.02 GiB 1991.81 GB)
>      Array Size : 5835374592 (5565.05 GiB 5975.42 GB)
>    Raid Devices : 6
>   Total Devices : 6
> Preferred Minor : 1
> 
>     Update Time : Thu Sep 19 21:05:16 2019
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 2
>   Spare Devices : 0
>        Checksum : e52ce931 - correct
>          Events : 271499
> 
>          Layout : offset=2
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     4       8       82        4      active sync   /dev/sdf2
> 
>    0     0       8       18        0      active sync   /dev/sdb2
>    1     1       0        0        1      faulty removed
>    2     2       0        0        2      faulty removed
>    3     3       8       66        3      active sync   /dev/sde2
>    4     4       8       82        4      active sync   /dev/sdf2
>    5     5       8        2        5      active sync   /dev/sda2
> 



