mdadm: making a spare active

Hi,

I have had a corruption of some sort on my raid5 setup,
and have not quite been able to solve it.

A previously failed disk partition is now a spare,
but I cannot get it made active again!

My array is /dev/md0
The components of this are /dev/sda5, sdb5, sdc5, sdd5

/dev/sdb5 was failed and is now a spare.
(NB: I think something got corrupted on that partition;
the other partitions on that disk work fine.)
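
If it helps, I can also check the health of the underlying disk. This is
roughly what I would run (assuming smartmontools is installed; device names
as above):

  smartctl -a /dev/sdb | grep -i -E 'reallocated|pending|uncorrectable'  # SMART bad-sector counters
  dmesg | grep -i sdb                                                    # any kernel I/O errors on that disk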

Here is some output...

nas:~ # mdadm -E /dev/sdb5 (the "spare")
--------------------------
/dev/sdb5:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : b54e46e1:b6a6e6ea:3ae5a5a5:04e207e4
  Creation Time : Fri Aug  4 22:42:14 2006
     Raid Level : raid5
  Used Dev Size : 244380672 (233.06 GiB 250.25 GB)
     Array Size : 733142016 (699.18 GiB 750.74 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Wed Jun 18 20:58:38 2008
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1
       Checksum : f11b2114 - correct
         Events : 0.3796128

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       21        4      spare   /dev/sdb5

   0     0       8        5        0      active sync   /dev/sda5
   1     1       0        0        1      faulty removed
   2     2       8       37        2      active sync   /dev/sdc5
   3     3       8       53        3      active sync   /dev/sdd5
   4     4       8       21        4      spare   /dev/sdb5

nas:~ # mdadm -E /dev/sda5 (an "ok" partition)
--------------------------
/dev/sda5:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : b54e46e1:b6a6e6ea:3ae5a5a5:04e207e4
  Creation Time : Fri Aug  4 22:42:14 2006
     Raid Level : raid5
  Used Dev Size : 244380672 (233.06 GiB 250.25 GB)
     Array Size : 733142016 (699.18 GiB 750.74 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Thu Jun 19 09:51:45 2008
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f11bd635 - correct
         Events : 0.3796148

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        5        0      active sync   /dev/sda5

   0     0       8        5        0      active sync   /dev/sda5
   1     1       0        0        1      faulty removed
   2     2       8       37        2      active sync   /dev/sdc5
   3     3       8       53        3      active sync   /dev/sdd5

------------------------------
nas:~ # mdadm -A /dev/md0 -U summaries /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
mdadm: /dev/md0 has been started with 3 drives (out of 4) and 1 spare.
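
To double-check how md sees each slot after assembling, I would look at the
following (just a verification step, using the same device names as above):

  mdadm --detail /dev/md0   # per-device state of the assembled array
  cat /proc/mdstat          # kernel view of active vs. spare devices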

------------------------------
nas:~ # mdadm --grow --raid-devices=4 /dev/md0
mdadm: /dev/md0: Cannot reshape array without increasing size (yet).
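
(As far as I can tell the array already has 4 raid devices, see the
"Raid Devices : 4" line in the -E output above, so --grow with
--raid-devices=4 has nothing to reshape. The configured slot count can
also be checked with:)

  mdadm --detail /dev/md0 | grep 'Raid Devices'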

------------------------------
I have also done
mdadm /dev/md0 -a /dev/sdb5
and this results in a recovery...

nas:~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb5[4] sda5[0] sdd5[3] sdc5[2]
      733142016 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
      [=>...................]  recovery =  7.3% (17900780/244380672) finish=174.1min speed=21666K/sec

unused devices: <none>

I have been through this recovery before, but once it finishes /dev/sdb5 still ends up as a spare.
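
In case I am missing a step, the full cycle I would try next is roughly this
(only a sketch; note that --zero-superblock wipes the md metadata on sdb5,
so I am assuming that is safe on a partition that is currently only a spare):

  mdadm /dev/md0 --remove /dev/sdb5    # drop the spare from the array
  mdadm --zero-superblock /dev/sdb5    # clear its stale md superblock
  mdadm /dev/md0 --add /dev/sdb5       # add it back and let it rebuild
  dmesg | tail                         # watch for read errors on the other disks,
                                       # which would abort the rebuild and leave sdb5 as a spare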

Any ideas?

Thanks!!

Jon B
