Matt Mackall <mpm@xxxxxxxxxxx> writes:

> This is a relatively simple scheme for making a filesystem with
> incremental online consistency checks of both data and metadata.
> Overhead can be well under 1% disk space and CPU overhead may also be
> very small, while greatly improving filesystem integrity.

The problem I see is that your scheme doesn't support metadata-only
checksums. IMHO those are the most interesting, because they have the
potential to be basically zero cost, unlike full data checksumming.
And metadata checksums alone are enough to handle the fsck problem.

I'm sure there are many cases where full checksumming makes sense too,
but it shouldn't be forced on everybody, because it will slow down some
important workloads (like O_DIRECT).

Metadata checksums would be best put directly into the file system's
data structures. Essentially every object (inode, extent, directory
entry, super block) should have a checksum that can be incrementally
updated.

-Andi
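
[To make the "checksum that can be incrementally updated" idea concrete,
here is a minimal user-space sketch in C. The struct layout, field names,
and the choice of CRC32C are illustrative assumptions, not taken from the
mail or from any real filesystem; a real implementation would live in the
filesystem's on-disk format and use a table-driven or hardware CRC.]

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical on-disk inode carrying its own checksum field.
 * 64-bit fields come first so the struct has no padding bytes. */
struct demo_inode {
        uint64_t i_size;
        uint64_t i_blocks;
        uint32_t i_mode;
        uint32_t i_uid;
        uint32_t i_gid;
        uint32_t i_links;
        uint32_t i_generation;
        uint32_t i_csum;        /* checksum over the rest of the inode */
};

/* Simple bitwise CRC32C (Castagnoli polynomial, reflected form).
 * A real filesystem would use a table-driven or hardware version. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++)
                        crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
}

/* Compute the checksum with the stored csum field treated as zero,
 * so the stored value does not feed back into itself. */
static uint32_t demo_inode_csum(const struct demo_inode *inode)
{
        struct demo_inode tmp = *inode;

        tmp.i_csum = 0;
        return crc32c(0, &tmp, sizeof(tmp));
}

/* Called whenever the inode is modified: only this one object is
 * re-checksummed, which is the "incremental" part. */
static void demo_inode_update_csum(struct demo_inode *inode)
{
        inode->i_csum = demo_inode_csum(inode);
}

/* Called when the object is read back from disk. */
static int demo_inode_verify(const struct demo_inode *inode)
{
        return inode->i_csum == demo_inode_csum(inode);
}

int main(void)
{
        struct demo_inode ino = { .i_mode = 0100644, .i_size = 4096 };

        demo_inode_update_csum(&ino);
        printf("verify after update: %d\n", demo_inode_verify(&ino));

        ino.i_size = 8192;              /* change without updating csum */
        printf("verify after unchecked change: %d\n",
               demo_inode_verify(&ino));

        demo_inode_update_csum(&ino);   /* legitimate update path */
        printf("verify after re-update: %d\n", demo_inode_verify(&ino));
        return 0;
}

[The property this sketch is meant to show: modifying one metadata object
only requires re-checksumming that object, so the cost scales with the
size of the change rather than with the filesystem, which is why
per-object metadata checksums can be close to zero cost.]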