hot-unplugged a disk, raid5 disappeared after reboot

Hello, everyone:
I had a 4*2T (sdh, sdi, sdj, sdk) raid5 with a 128K chunk size and a
stripe_cache_size of 2048, using mdadm 3.2.2 and kernel 2.6.38 (a
rough sketch of the setup follows the first mdadm -E output below). I
wrote a program to test the write performance of the raid5. While the
program was writing data to the raid5, I unplugged sdk and then
rebooted the machine. After that, I ran "mdadm --assemble --scan" to
assemble the array. The output was "not enough to start the array
while not clean - consider --force", and the array seemed to have
disappeared. Then I used "mdadm -E /dev/sd[ijh]" to check the
superblocks of sdh, sdi and sdj; the output was the following:

# mdadm -E /dev/sd[ijh]
/dev/sdh:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2e5a470:4a7e42c3:96273d8c:1c8ad2b7
           Name : localhost:RAID5  (local to host localhost)
  Creation Time : Mon Sep 10 20:17:15 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 310344694 (147.98 GiB 158.90 GB)
     Array Size : 931031040 (443.95 GiB 476.69 GB)
  Used Dev Size : 310343680 (147.98 GiB 158.90 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 2d86ad25:16130eaf:6da00473:60ba7143

    Update Time : Tue Sep 11 12:18:55 2012
       Checksum : 878b469a - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA. ('A' == active, '.' == missing)
/dev/sdi:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2e5a470:4a7e42c3:96273d8c:1c8ad2b7
           Name : localhost:RAID5  (local to host localhost)
  Creation Time : Mon Sep 10 20:17:15 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 310344844 (147.98 GiB 158.90 GB)
     Array Size : 931031040 (443.95 GiB 476.69 GB)
  Used Dev Size : 310343680 (147.98 GiB 158.90 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 7f74fb29:1b2c5c03:1e2b8de1:e25c9e39

    Update Time : Tue Sep 11 12:18:55 2012
       Checksum : 44dc7bbb - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA. ('A' == active, '.' == missing)
/dev/sdj:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2e5a470:4a7e42c3:96273d8c:1c8ad2b7
           Name : localhost:RAID5  (local to host localhost)
  Creation Time : Mon Sep 10 20:17:15 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 310344346 (147.98 GiB 158.90 GB)
     Array Size : 931031040 (443.95 GiB 476.69 GB)
  Used Dev Size : 310343680 (147.98 GiB 158.90 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 5c58dcec:d03b6bd0:738bc5ea:e0c75d33

    Update Time : Tue Sep 11 12:18:55 2012
       Checksum : d7c438b0 - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA. ('A' == active, '.' == missing)
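
For reference, this is roughly how the array was set up and loaded
before the test. It is only a sketch from memory: the md device name
(/dev/md0), the exact create options, and the dd command (a stand-in
for my write-test program) may not match exactly what I used.

# create the 4-disk raid5 (128K chunk, 1.2 metadata)
mdadm --create /dev/md0 --metadata=1.2 --level=5 --raid-devices=4 \
      --chunk=128 /dev/sdh /dev/sdi /dev/sdj /dev/sdk
# enlarge the stripe cache before the write test
echo 2048 > /sys/block/md0/md/stripe_cache_size
# stand-in for the write-test program: large sequential direct writes
dd if=/dev/zero of=/dev/md0 bs=1M oflag=direct
# sdk was unplugged while the writes were still running, then the
# machine was rebooted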

As the output of "mdadm --assemble --scan" suggested, I ran "mdadm
--assemble --scan --force" to assemble the array again, and that did
work. I then used "mdadm -E /dev/sd[ijh]" again to look at the
superblocks of sdh, sdi and sdj; the output was the following:

# mdadm -E /dev/sd[ijh]
/dev/sdh:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2e5a470:4a7e42c3:96273d8c:1c8ad2b7
           Name : localhost:RAID5  (local to host localhost)
  Creation Time : Mon Sep 10 20:17:15 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 310344694 (147.98 GiB 158.90 GB)
     Array Size : 931031040 (443.95 GiB 476.69 GB)
  Used Dev Size : 310343680 (147.98 GiB 158.90 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 2d86ad25:16130eaf:6da00473:60ba7143

    Update Time : Tue Sep 11 12:18:55 2012
       Checksum : 878b469a - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA. ('A' == active, '.' == missing)
/dev/sdi:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2e5a470:4a7e42c3:96273d8c:1c8ad2b7
           Name : localhost:RAID5  (local to host localhost)
  Creation Time : Mon Sep 10 20:17:15 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 310344844 (147.98 GiB 158.90 GB)
     Array Size : 931031040 (443.95 GiB 476.69 GB)
  Used Dev Size : 310343680 (147.98 GiB 158.90 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 7f74fb29:1b2c5c03:1e2b8de1:e25c9e39

    Update Time : Tue Sep 11 12:18:55 2012
       Checksum : 44dc7bbb - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA. ('A' == active, '.' == missing)
/dev/sdj:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2e5a470:4a7e42c3:96273d8c:1c8ad2b7
           Name : localhost:RAID5  (local to host localhost)
  Creation Time : Mon Sep 10 20:17:15 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 310344346 (147.98 GiB 158.90 GB)
     Array Size : 931031040 (443.95 GiB 476.69 GB)
  Used Dev Size : 310343680 (147.98 GiB 158.90 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 5c58dcec:d03b6bd0:738bc5ea:e0c75d33

    Update Time : Tue Sep 11 12:18:55 2012
       Checksum : d7c438b0 - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA. ('A' == active, '.' == missing)

There was basically no difference between the two outputs, except that
the state of sdh was "active" in the former output and "clean" in the
latter.
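
In case it helps, after the forced assembly something like the
following could be used to confirm that the array is running degraded
and to bring sdk back in (the md device name /dev/md0 is just an
assumption):

cat /proc/mdstat                # should show the raid5 with 3 of 4 devices
mdadm --detail /dev/md0         # "State" should show degraded
mdadm /dev/md0 --add /dev/sdk   # re-adding the unplugged disk starts a rebuild
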
What was the problem? Can anyone help me?