How do I bring this disk back into the fold?

Hi, all --

I recently migrated our disk farm to a new server (see the next email),
and I see that one of the partitions in a RAID5 set is inactive:

  diskfarm:~ # cat /proc/mdstat 
  Personalities : [raid6] [raid5] [raid4] 
  md127 : inactive sdf2[1](S) sdl2[0](S) sdj2[3](S)
        2196934199 blocks super 1.2
         
  md0 : active raid5 sdc1[3] sdd1[4] sdb1[0]
        11720265216 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]

  diskfarm:~ # mdadm --examine /dev/sd[bcde]1 | egrep '/dev|Name|Role|State|Checksum|Events|UUID'
  /dev/sdb1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : clean
      Device UUID : bbcf5aff:e4a928b8:4fd788c2:c3f298da
         Checksum : 4aa669d5 - correct
           Events : 77944
     Device Role : Active device 0
     Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
  /dev/sdc1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : clean
      Device UUID : c0a32425:2d206e98:78f9c264:d39e9720
         Checksum : 38ee846d - correct
           Events : 77944
     Device Role : Active device 2
     Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
  /dev/sdd1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : clean
      Device UUID : f05a143b:50c9b024:36714b9a:44b6a159
         Checksum : 49b381d8 - correct
           Events : 77944
     Device Role : Active device 3
     Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
  /dev/sde1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : active
      Device UUID : 835389bd:c065c575:0b9f2357:9070a400
         Checksum : 80e1b8c4 - correct
           Events : 60360
     Device Role : Active device 1
     Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

  diskfarm:~ # mdadm --detail /dev/md0
  /dev/md0:
             Version : 1.2
       Creation Time : Mon Feb  6 05:56:35 2017
          Raid Level : raid5
          Array Size : 11720265216 (10.92 TiB 12.00 TB)
       Used Dev Size : 3906755072 (3.64 TiB 4.00 TB)
        Raid Devices : 4
       Total Devices : 3
         Persistence : Superblock is persistent

         Update Time : Sun Mar 28 01:44:43 2021
               State : clean, degraded 
      Active Devices : 3
     Working Devices : 3
      Failed Devices : 0
       Spare Devices : 0

              Layout : left-symmetric
          Chunk Size : 512K

  Consistency Policy : resync

                Name : diskfarm:0  (local to host diskfarm)
                UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
              Events : 77944

      Number   Major   Minor   RaidDevice State
         0       8       17        0      active sync   /dev/sdb1
         -       0        0        1      removed
         3       8       33        2      active sync   /dev/sdc1
         4       8       49        3      active sync   /dev/sdd1

Before I go too crazy ...  What do I need to do to bring sde1 back into
the RAID volume, either to catch up the missing 17k events (probably
preferred) or just to rebuild it?
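
My best guess from the mdadm(8) man page is something like the
following, but I'd rather get a sanity check than find out the hard way
(device names as in the output above):

  # try a re-add first; with no write-intent bitmap (Consistency Policy
  # above is "resync"), I expect this to be refused or to fall through
  # to a full recovery anyway
  mdadm /dev/md0 --re-add /dev/sde1

  # failing that, a plain add, which should just kick off a full rebuild
  # of sde1 from the other three members
  mdadm /dev/md0 --add /dev/sde1

  # and then watch the rebuild
  cat /proc/mdstat

Is that about right, or is there a smarter way to make up the event gap?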


TIA & HANN

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



