Re: Filesystem corruption on RAID1


 



On 20-08-2017 17:48, Mikael Abrahamsson wrote:

> This involves manual intervention.
>
> While I don't know how to implement this, let's at least see if we can
> architect something for throwing ideas around.
>
> What about having an option for any raid level that would do "repair
> on read". So you can do "0" or "1" on this. RAID1 would mean it reads
> all stripes and if there is inconsistency, pick one and write it to
> all of them. It could also be some kind of IOCTL option I guess. For
> RAID5/6, read all data drives, and check parity. If parity is wrong,
> write parity.

Wait, isn't that what MDRAID already does by issuing "echo repair > /sys/block/mdX/md/sync_action"?

The big plus would be not to blindly copy the first mirror/stripe, but rather to identify the correct copy and use it to repair any corrupted data.
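For a 3-way mirror, picking "the correct one" can be done by simple majority voting. A toy sketch of that logic (plain Python, with byte buffers standing in for the member devices; all names are mine, nothing here is actual MD code):

```python
# Sketch of "repair on read" with majority voting on a 3-way RAID1.
# Member devices are modeled as bytearrays; a real implementation
# would work on in-flight bios. All names are hypothetical.

from collections import Counter

BLOCK = 4096  # assumed block size

def vote_and_repair(members, blockno):
    """Read one block from every mirror; if the copies disagree,
    rewrite the minority copies with the majority content."""
    off = blockno * BLOCK
    copies = [bytes(m[off:off + BLOCK]) for m in members]
    winner, votes = Counter(copies).most_common(1)[0]
    if votes <= len(members) // 2:
        raise IOError("no majority: cannot tell good data from bad")
    for m, c in zip(members, copies):
        if c != winner:
            m[off:off + BLOCK] = winner  # repair the bad mirror
    return winner

# Example: three mirrors, one silently corrupted.
good = bytes(range(256)) * (BLOCK // 256)
m1, m2, m3 = bytearray(good), bytearray(good), bytearray(good)
m2[10] ^= 0xFF  # flip one byte on the second mirror
data = vote_and_repair([m1, m2, m3], 0)
assert data == good and bytes(m2) == good  # detected and fixed
```

Note this is exactly why plain 2-way RAID1 cannot do it: with two disagreeing copies there is no majority, and the loop above would have to give up.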

Obviously you need sufficient redundancy to do that, by means of 3-way RAID1, double parity (RAID6) or checksummed data blocks (ZFS, BTRFS and dm-integrity).
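With per-block checksums (the dm-integrity/BTRFS/ZFS approach) even a plain 2-way mirror is enough, because each copy can be validated on its own instead of by voting. A toy sketch, with CRC32 standing in for the stronger checksums those systems actually use (function names are mine):

```python
# With a checksum stored per block, the correct copy is identified
# directly, so 2-way RAID1 suffices. Toy sketch; CRC32 stands in
# for the real checksums used by dm-integrity, BTRFS or ZFS.

import zlib

def write_block(mirrors, checksums, blockno, data):
    checksums[blockno] = zlib.crc32(data)
    for m in mirrors:
        m[blockno] = data

def read_block(mirrors, checksums, blockno):
    """Return the first copy whose checksum matches, repairing the rest."""
    good = None
    for m in mirrors:
        if zlib.crc32(m[blockno]) == checksums[blockno]:
            good = m[blockno]
            break
    if good is None:
        raise IOError("all copies corrupted")
    for m in mirrors:
        if m[blockno] != good:
            m[blockno] = good  # self-healing read
    return good

# Example: 2-way mirror, first copy silently corrupted.
m1, m2, cs = {}, {}, {}
write_block([m1, m2], cs, 7, b"important data")
m1[7] = b"garbage!!!"
assert read_block([m1, m2], cs, 7) == b"important data"
assert m1[7] == b"important data"  # repaired from the good copy
```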

Please note that these methods alone do not provide complete protection against other failure modes such as phantom writes; however, any of them would significantly raise the current level of data protection.

Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


