Bad block management in raid1

Hi there,

Can you confirm the scenarios below in which blocks are considered to be "bad" blocks?

1. A read error on a degraded array (a state in which the array has experienced the failure of one or more disks), where the data cannot be recovered from the other legs, is a "bad" block and gets recorded.
2. During recovery from a source leg to a target leg, if the source cannot be read for any reason, the corresponding block on the target leg gets recorded as "bad" (though the target block itself is writable and can be used in the future).
3. A write to a block fails (though this leads to degraded mode).
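
To make sure I am reading these three cases correctly, here is a toy C sketch of the decision logic as I understand it. It models the behaviour with an in-memory array; none of the names are the real drivers/md/raid1.c functions, they are all hypothetical.

/*
 * Toy model of the three cases above -- NOT the real drivers/md/raid1.c
 * logic. Every name here is hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSECTORS 16
typedef uint64_t sector_t;

struct leg {
	const char *name;
	bool in_sync;               /* leg holds valid, up-to-date data */
	bool media_err[NSECTORS];   /* injected I/O failures            */
	bool bad[NSECTORS];         /* toy per-leg bad-block log        */
};

static void record_badblock(struct leg *l, sector_t s)
{
	l->bad[s] = true;
	printf("bad block recorded: leg=%s sector=%llu\n",
	       l->name, (unsigned long long)s);
}

/* Case 1: read error; record bad only if no other leg has the data. */
static void raid1_read(struct leg *legs, int n, int first, sector_t s)
{
	if (legs[first].in_sync && !legs[first].media_err[s])
		return;                 /* read succeeded */
	for (int i = 0; i < n; i++)
		if (i != first && legs[i].in_sync && !legs[i].media_err[s])
			return;         /* another leg supplied the data */
	record_badblock(&legs[first], s);
}

/* Case 2: recovery; a source read failure marks the *target* sector
 * bad, even though the target sector itself is still writable. */
static void raid1_recover(struct leg *src, struct leg *tgt, sector_t s)
{
	if (src->media_err[s])
		record_badblock(tgt, s);
}

/* Case 3: a failed write gets recorded as bad on that leg. */
static void raid1_write(struct leg *legs, int n, sector_t s)
{
	for (int i = 0; i < n; i++)
		if (legs[i].media_err[s])
			record_badblock(&legs[i], s);
}

int main(void)
{
	struct leg legs[2] = {
		{ .name = "sda", .in_sync = true  },
		{ .name = "sdb", .in_sync = false }, /* degraded */
	};

	legs[0].media_err[3] = true;
	raid1_read(legs, 2, 0, 3);            /* case 1 */
	raid1_recover(&legs[0], &legs[1], 3); /* case 2 */
	legs[0].media_err[5] = true;
	raid1_write(legs, 2, 5);              /* case 3 */
	return 0;
}

If my reading of any of these cases is wrong, the sketch should make it easy to point at the exact spot.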

Are they all implemented, and are there any other scenarios?

When exactly does raid1 decide to mark the device "Faulty"? Does that depend on the number of bad blocks in the list, i.e. 512?
What is the size in the metadata for storing the bad block info?
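
For the metadata question, my mental model is that each bad-block record is a single 64-bit word packing the start sector, a length of up to 512 sectors, and an "acknowledged" bit, so a 4 KiB on-disk log would hold 512 entries -- which I assume is where the 512 figure comes from. Below is a standalone sketch of that packing, modeled on the BB_* macros in the kernel's badblocks code; please correct me if the layout differs.

/*
 * Sketch of the 64-bit bad-block record packing as I understand it,
 * modeled on the BB_* macros in the kernel's badblocks code. Please
 * treat the exact layout as my assumption, not authoritative.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

#define BB_LEN_MASK    0x00000000000001FFULL /* bits 0-8:  length - 1   */
#define BB_OFFSET_MASK 0x7FFFFFFFFFFFFE00ULL /* bits 9-62: start sector */
#define BB_ACK_MASK    0x8000000000000000ULL /* bit 63:    acknowledged */
#define BB_MAX_LEN     512                   /* max sectors per entry   */

#define BB_OFFSET(x) (((x) & BB_OFFSET_MASK) >> 9)
#define BB_LEN(x)    (((x) & BB_LEN_MASK) + 1)
#define BB_ACK(x)    (!!((x) & BB_ACK_MASK))
#define BB_MAKE(a, l, ack) \
	(((u64)(a) << 9) | ((u64)(l) - 1) | ((u64)(!!(ack)) << 63))

int main(void)
{
	/* One 8-byte entry: 16 bad sectors starting at sector 123456,
	 * already acknowledged (i.e. written out to the metadata). */
	u64 e = BB_MAKE(123456, 16, 1);

	printf("offset=%llu len=%llu ack=%d (raw=0x%016llx)\n",
	       (unsigned long long)BB_OFFSET(e),
	       (unsigned long long)BB_LEN(e),
	       BB_ACK(e),
	       (unsigned long long)e);
	return 0;
}

At 8 bytes per record, that packing is what makes a small fixed-size log in the superblock area practical, if I understand it right.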

Thanks,
Ankur