raid5: bad sectors after power loss

Hi everyone:
    I have a problem with a RAID5 array.
    I created the array for a write performance test with the command
"mdadm -Cv --chunk=128 /dev/md0 -l5 -n4 /dev/sd[abcd]"; the capacity of each
disk is 2TB. After the array was created, I changed its stripe_cache_size to
2048.
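    (For reference, the stripe_cache_size change was made through sysfs,
roughly like this; /dev/md0 is the array created above:)

    echo 2048 > /sys/block/md0/md/stripe_cache_size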
    Then I used a program to write 150 files to the array in parallel, at a
speed of 1 MB/s per file. Unfortunately, the power went off suddenly during
the test. When I powered the machine on again, I found the RAID5 array was in
recovery. When the recovery progress reached 98%, a write error occurred. I
used "smartctl -A /dev/sd*" to check the health status of the disks and found
that the RAW_VALUE of the Current_Pending_Sector attribute was 1 on both sda
and sdb.
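    In case it helps reproduce this, the write workload was roughly
equivalent to the sketch below (the mount point /mnt/md0 and the file names
are just placeholders for what our test tool does, and the 1 MB/s throttling
per writer is not shown):

    # start 150 parallel writers onto the filesystem on top of /dev/md0
    for i in $(seq 1 150); do
        dd if=/dev/zero of=/mnt/md0/file$i bs=1M count=1024 oflag=direct &
    done
    wait

    The per-disk SMART check was simply:

    for d in /dev/sd[abcd]; do
        smartctl -A "$d" | grep Current_Pending_Sector
    done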
    Then I used HDD Regenerator to check whether there were bad blocks on
the disks. Its output indicated that sda and sdb each did have a bad
sector.
    These disks are being used for the first time since purchase. Is it
normal for them to have bad sectors? Could you please help me?



