raid10 array split into two degraded raid10 arrays

Hi, everyone:
          I am Vincent, and I am writing to ask a question about mdadm.
          I created a RAID10 array from 4 160 GB disks with the command:
          mdadm -Cv /dev/md0 -l10 -n4 /dev/sd[abcd]
          My mdadm version is 3.2.2 and my kernel version is 2.6.38.
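
          (For reference, the initial resync can be watched with the standard
          commands below; nothing here is specific to my setup:)

          # show the array layout and the rebuild/resync state
          mdadm --detail /dev/md0
          # show the resync percentage
          cat /proc/mdstat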
          While the RAID10 array was still resyncing, I created a file system
          on it with: mkfs.ext3 /dev/md0
          Everything was OK and the array continued to resync, but when the
          resync reached 3.4% there were a lot of I/O errors on "sda" and
          "sdc": both disks had bad blocks.
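
          (If more detail about the disk errors is useful, the checks I can
          run are along these lines; smartctl is from smartmontools, and the
          grep patterns are only examples:)

          # kernel log messages mentioning the two failing disks
          dmesg | grep -iE 'sda|sdc'
          # SMART attributes that normally reflect bad or pending sectors
          smartctl -a /dev/sda | grep -iE 'reallocated|pending'
          smartctl -a /dev/sdc | grep -iE 'reallocated|pending'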
          Then I used "cat /proc/mdstat" to see the status of /dev/md0:
 
          Personalities : [raid10]      
          md0 : active raid10 sdb[1] sdd[3]
          310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
 
          unused devices: <none>
         
          /dev/sda and /dev/sdc had been dropped from the array.
          Then I rebooted the system, but afterwards "cat /proc/mdstat" showed
          the following status:
 
          Personalities : [raid10] 
          md126 : active raid10 sda[0] sdc[2]
          310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [U_U_]
      
          md0 : active raid10 sdb[1] sdd[3]
          310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
      
          unused devices: <none>
           
          There was now an array named md126, consisting of /dev/sda and
          /dev/sdc.
          I then used "mdadm --assemble --scan" to assemble the md devices.
          The output of the command was:
                  
          mdadm: /dev/md/0 exists - ignoring
          md: md0 stopped.
          mdadm: ignoring /dev/sda as it reports /dev/sdd as failed
          mdadm: ignoring /dev/sdc as it reports /dev/sdd as failed
          md: bind<sdd>
          md: bind<sdb>
          md/raid10:md0: active with 2 out of 4 devices
          md0: detected capacity change from 0 to 317791928320
          mdadm: /dev/md0 has been started with 2 drives (out of 4).
          md0: unknown partition table
          mdadm: /dev/md/0 exists - ignoring
          md: md126 stopped.
          md: bind<sdc>
          md: bind<sda>
          md/raid10:md126: active with 2 out of 4 devices
          md126: detected capacity change from 0 to 317791928320
          mdadm: /dev/md126 has been started with 2 drives (out of 4).
          md126: unknown partition table
 
          I then used "mdadm -E /dev/sda", "mdadm -E /dev/sdb",
          "mdadm -E /dev/sdc", "mdadm -E /dev/sdd",
          "mdadm -D /dev/md0" and "mdadm -D /dev/md126" to check the detailed
          information of sda, sdb, sdc and sdd.
          I found that the "Array UUID" of all of these devices (sda, sdb,
          sdc, sdd) was the same, but the "Events" and "Update Time" of "sda"
          and "sdc" matched each other (21, Fri Jul 6 11:02:09 2012), while
          the "Events" and "Update Time" of "sdb" and "sdd" matched each other
          (35, Fri Jul 6 11:06:21 2012).
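
          (Roughly, the fields I compared came from a loop like this:)

          # dump the relevant superblock fields of every member disk
          for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
              echo "== $d"
              mdadm -E "$d" | grep -E 'Array UUID|Update Time|Events'
          done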
 
          Although the "Update Time" and "Events" of "sda" and "sdc" did not
          match those of "sdb" and "sdd", all four disks had the same
          "Array UUID". Why did this array turn into two degraded arrays with
          the same UUID?
          Since the two arrays have the same UUID, it is difficult to
          distinguish between them and to use them. This does not seem
          reasonable to me; could you help me?
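
          (In case it matters: I assume getting back to a single array would
          look something like one of the options below, but I have not run
          any of this yet and would like to know whether it is right:)

          # option 1: stop both half-arrays and force-assemble all four
          # members (--force accepts the members whose metadata is stale)
          mdadm --stop /dev/md126
          mdadm --stop /dev/md0
          mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd

          # option 2: keep the running md0 (sdb/sdd), stop only the stale
          # half, and add its disks back so they resync
          # mdadm --stop /dev/md126
          # mdadm /dev/md0 --add /dev/sda /dev/sdc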
