Re: How does mdadm react to disk corruption during a check?

On 11/11/2020 10:39, Aymeric wrote:
> Hello,
>
> I've searched a bit on the wiki but didn't find a clear answer.
>
> So let's assume we have a RAID 1 with two disks, sda and sdb.
> You can read and write on both disks without I/O errors, so neither
> drive is going to be kicked out of the array.
> The only catch is that sda does not read back what was written to
> some sectors.
>
> I know that mdadm cannot detect corruption during normal usage, and
> as reads on the array are served in chunks from either of the two
> disks, you get a partially corrupted read.

md-raid is not meant to protect against corruption - it protects against disk failure.

> Now, during the checkarray command, md reads the whole of both disks,
> so it will detect that sda and sdb do not contain the same data (at
> least I hope that checkarray compares the data on both disks).
>
> How does it decide which drive (sda or sdb) has the correct data to
> write back to the other disk?

It just assumes that sda is correct ...

> Is there any message available in such a case?
>
> And I have the same question for RAID 1 on three disks, with the same
> behaviour on sda.

I'm pretty certain it's the same.
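
For reference, checkarray is essentially a wrapper around the md sysfs interface, so you can drive it by hand. A minimal sketch, assuming the array is /dev/md0:

    # Start a scrub: md reads every copy and counts inconsistencies,
    # but takes no corrective action.
    echo check > /sys/block/md0/md/sync_action

    # Once it finishes, this holds the number of sectors that disagreed.
    cat /sys/block/md0/md/mismatch_cnt

    # "repair" fixes mismatches as it finds them; on raid1 all copies
    # are overwritten with the content of one of them (hence the
    # "assumes sda is correct" behaviour above).
    echo repair > /sys/block/md0/md/sync_action

As far as I know the kernel log only records the check starting and finishing, so mismatch_cnt is where you see that something was found.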


When I get it working ... famous last words ... the system I'm building has md-raid on top of dm-integrity. So if you do get corruption, the dm layer should return a read error and trigger a clean-up write. And once that's sorted I'll be trying to integrate it into md-raid as an option.
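
If anyone wants to experiment with that layering by hand, here's a rough sketch using integritysetup (shipped with cryptsetup); the partition and device-mapper names are just examples:

    # Give each partition a dm-integrity superblock and per-sector
    # checksums (this wipes the partition!).
    integritysetup format /dev/sda1
    integritysetup format /dev/sdb1

    # Open them: any read whose checksum doesn't match now comes back
    # as an I/O error instead of silently wrong data.
    integritysetup open /dev/sda1 int-sda1
    integritysetup open /dev/sdb1 int-sdb1

    # Build the mirror on top, so md sees the read error and rewrites
    # the block from the good copy.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/int-sda1 /dev/mapper/int-sdb1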

Cheers,
Wol


