On Thu, Nov 16, 2017 at 3:04 AM, Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote:

> For example, if we use the following device mapper layout:
>
>     FS (can be any fs with metadata csum)
>                    |
>              dm-integrity
>                    |
>                dm-raid1
>               /        \
>           disk1        disk2

You would instead do dm-integrity per physical device, then make the
two dm-integrity devices members of an md raid1 array. Now when an
integrity check fails, it's basically a UNC error to raid1, which then
gets the copy from the other device.

But your point that dm-integrity is more complicated is true: it's at
least partly COW based in order to get the atomic write guarantee
needed to ensure data blocks and csums are always in sync and
reliable. And this applies to the entire file system.

The READ bio concept you're proposing leverages mostly existing code
and has no write performance penalty or added complexity, but it does
miss data for file systems that don't csum data blocks. It's good
that the file system can stay alive, but data is the much bigger
target in terms of the percentage of space it occupies on the physical
media, and it's more likely to be corrupted or go missing due to a
media defect or the like. Silent data corruption is still possible.

> I just want to make device-mapper raid able to handle such case too.
> Especially when most fs supports checksum for their metadata.

XFS does metadata csums by default. But ext4 still doesn't use them
for either metadata or the journal by default; they remain optional.
So for now this mainly benefits XFS.

--
Chris Murphy
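
P.S. A rough sketch of that per-device stacking, assuming a 4.12+ kernel
with dm-integrity and the integritysetup tool from cryptsetup 2.x; the
/dev/sda, /dev/sdb devices and the "integ*" names are just placeholders:

    # Give each physical disk its own standalone dm-integrity layer
    integritysetup format /dev/sda
    integritysetup format /dev/sdb
    integritysetup open /dev/sda integ1
    integritysetup open /dev/sdb integ2

    # Mirror the two integrity devices; a csum mismatch on one leg
    # surfaces as a read error and md fetches the copy from the other
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/integ1 /dev/mapper/integ2

    # XFS enables metadata csums by default (v5/crc format);
    # ext4 needs the feature requested explicitly
    mkfs.xfs /dev/md0
    # or: mkfs.ext4 -O metadata_csum /dev/md0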